A hierarchical approach for building distributed quantum systems
Zohreh Davarzani, Mariam Zomorodi & Mahboobeh Houshmand
Scientific Reports volume 12, Article number: 15421 (2022)
Subjects: Nanoscience and technology, Quantum information
In this paper, a multi-layer hierarchical architecture is proposed for distributing quantum computation. In distributed quantum computing (DQC), different units or subsystems communicate by teleportation in order to transfer quantum information. Quantum teleportation requires classical and quantum resources, and hence it is essential to minimize the number of communications among these subsystems. To this end, a two-level hierarchical optimization method is proposed to distribute the qubits among different parts. In Level I, an integer linear programming model is presented to distribute a monolithic quantum circuit into K balanced partitions with the minimum number of non-local gates. When a qubit is teleported to a destination part, it can be used optimally by other gates there before being teleported back to its source part. In Level II, a data structure is proposed for the quantum circuit and a recursive function is applied to minimize the number of teleportations. Experimental results show that the proposed approach outperforms the previous ones.
In the past decade, the rapid growth of science and the engineering of quantum devices have advanced quantum computation from single isolated quantum devices toward multi-qubit processors1. As such, quantum computation has seen rapid growth with high performance in many areas. The standard approach in quantum computing is to design quantum algorithms as monolithic circuits.
Quantum computing now has many advantages over classical computing. One of them is that quantum computers can perform exponentially better than classical ones for many computational problems2. Yet, due to implementation complexity, there are many challenges in designing a large-scale quantum computer. The computing power of a quantum system increases exponentially with the number of embedded qubits3, and a problem requiring more qubits is more challenging for a quantum computer to solve.
Though advantageous, quantum computers have several shortcomings. One of them is that the information stored in qubits may encounter errors before fault-tolerant approaches can be applied. This is because the qubits interact with the outside world, which may lead to decoherence4,5, and as the number of qubits increases, the quantum information becomes more fragile and more susceptible to errors6. Errors can also arise from applying an operation on a quantum state7, which could in principle be avoided by separating qubits from their surroundings; however, since qubits must communicate and undergo read and write operations, this solution is not practical. There are several physical platforms that address these challenges. Systems of trapped atomic ions can be accurately controlled and manipulated, and a large variety of interactions and measurements of relevant observables can be engineered with high precision8,9. The superconducting qubit modality has been used to demonstrate prototype algorithms with noisy, non-error-corrected qubits, and is currently one of the main approaches for implementing medium- and large-scale quantum devices with low-noise, highly controllable coherent interactions10,11. Another technology used to design large-scale quantum systems is photonic quantum computing; quantum entanglement, teleportation and quantum key distribution build on this technology because photons provide a quantum system with low noise and high performance12.
One way to address these challenges is to divide a quantum system into several limited-capacity quantum subsystems, with the qubits distributed among them, which is referred to as a distributed quantum system13,14,15.
Distributed quantum circuit
A distributed quantum system consists of several independent quantum units with limited capacity that appear as a single quantum system to the users. The units might differ from each other in terms of hardware and software. The hardware limitations of each quantum unit are described by a connected graph called the coupling map. The purpose of these limitations is to preserve and control qubits against decoherence and noise7.
Minimizing communications among the quantum units of a distributed quantum system is essential in reducing the cost of the whole system. The distribution of qubits among different subsystems leads to some non-local gates, and to execute a non-local gate, its qubits must be brought into a single subsystem. According to the no-cloning theorem, making independent copies of qubits is not allowed in a quantum system. Instead, the teleportation protocol can be used to move qubits between subsystems16. This protocol requires an entangled pair of qubits between two nodes in order to teleport the state of a qubit from one node to the other. This operation is expensive and can lead to substantial latency due to the stochastic nature of the underlying processes17,18,19. A proper distribution algorithm can therefore decrease the communications between quantum units dramatically. With this in mind, this paper proposes an optimized distribution of quantum systems.
An abstraction of distributed quantum computing is shown in Fig. 1. It is described through a set of (logical) layers, with the higher layers depending on the functionalities provided by the lower ones7. Starting from the top, there is the quantum algorithm in the form of a quantum circuit. This algorithm is completely independent and unaware of logical and physical hardware constraints. In the second layer, there is a distribution algorithm, which implements the circuit of the previous layer in a distributed way. This layer consists of two parts, a load balancer and an optimizer, as follows:
The qubits must be distributed in a well-balanced manner among the limited-capacity quantum units. Therefore, a load-balancing problem must be solved at this level.
Non-local operations require qubits to communicate with qubits on other units. Hence, a teleportation protocol is needed for the units to communicate, and minimizing the number of teleportations among them is the goal at this level.
At the next level, quantum units communicate with each other remotely via classical and quantum channels. Both local and non-local operations can be executed at this level: local operations act on qubits stored within the same quantum unit, and non-local operations act on qubits stored on different quantum units. As mentioned above, a quantum teleportation protocol is necessary for the units to communicate with each other. This protocol consists of several phases, e.g. EPR pair generation, local operations, measurement and classical communication15. Each teleportation involves two qubits stored on different units; these two qubits, entangled together, are called an entangled pair. Each qubit of an entangled pair is used to communicate a single qubit to another quantum device. Therefore, at the very bottom level, hardware for generating entangled pairs is required so that the units can communicate with each other20. Each quantum device may have its own hardware to create entangled pairs, or a separate device may generate them centrally20.
A multi-layer architecture of a quantum circuit.
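For reference, the phases listed above (EPR pair generation, local operations, measurement and classical communication) can be illustrated with a minimal statevector simulation of teleporting one qubit over an EPR pair. This is a toy sketch of the textbook protocol written with NumPy only; it is not the procedure of any particular hardware or of the units considered in this paper.

```python
import numpy as np

# Single-qubit gates for a 3-qubit statevector simulation (qubit 0 is most significant).
I2 = np.eye(2)
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])
Z = np.array([[1, 0], [0, -1]])

def op(gate, target, n=3):
    """Embed a single-qubit gate acting on `target` into an n-qubit operator."""
    mats = [gate if q == target else I2 for q in range(n)]
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

def cnot(control, target, n=3):
    """Build the n-qubit CNOT permutation matrix."""
    dim = 2 ** n
    U = np.zeros((dim, dim))
    for i in range(dim):
        bits = [(i >> (n - 1 - q)) & 1 for q in range(n)]
        if bits[control] == 1:
            bits[target] ^= 1
        j = sum(b << (n - 1 - q) for q, b in enumerate(bits))
        U[j, i] = 1
    return U

# Qubit 0 holds the unknown state to teleport; qubits 1 and 2 start in |0>.
alpha, beta = 0.6, 0.8
psi = np.kron(np.array([alpha, beta]), np.kron([1, 0], [1, 0]))

# Phase 1: EPR pair generation between qubit 1 (sender side) and qubit 2 (receiver side).
psi = cnot(1, 2) @ op(H, 1) @ psi

# Phase 2: local operations on the sender's qubits 0 and 1.
psi = op(H, 0) @ cnot(0, 1) @ psi

# Phase 3: measure qubits 0 and 1 (sample one outcome and project the state onto it).
rng = np.random.default_rng(0)
outcome = rng.choice(8, p=np.abs(psi) ** 2)
m0, m1 = (outcome >> 2) & 1, (outcome >> 1) & 1
keep = [((i >> 2) & 1) == m0 and ((i >> 1) & 1) == m1 for i in range(8)]
psi = np.where(keep, psi, 0.0)
psi /= np.linalg.norm(psi)

# Phase 4: classical communication of (m0, m1) and conditional corrections on qubit 2.
if m1:
    psi = op(X, 2) @ psi
if m0:
    psi = op(Z, 2) @ psi

# Qubit 2 now carries the original amplitudes (alpha, beta).
print(psi.reshape(2, 2, 2)[m0, m1, :])  # -> approximately [0.6, 0.8]
```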
In this work, a two-level hierarchical optimization model is proposed to design a large-scale distributed quantum circuit. A monolithic quantum circuit is distributed over K quantum units, and minimizing the communication between the K partitions is the objective. In Level I, an integer linear programming approach is proposed to distribute the qubits to K parts in a well-balanced manner, minimizing the number of communications among these units. At this level, each non-local gate requires two teleportations, because after a qubit is teleported to the destination it is teleported back to its source. However, once a qubit of a non-local gate has been teleported from the source to the destination, it may be used in the destination by other non-local gates before being teleported back to its source; only after this optimal utilization is the teleported qubit returned. Applying this idea reduces the number of teleportations, and a recursive approach is proposed to account for it in Level II. Through this hierarchical model, the required number of teleportations can become fewer than the number of non-local gates.
The remainder of this paper is organized as follows. "Related work" presents an overview of prior work, "The proposed algorithm" describes the proposed method in detail, and "Experimental results" presents and discusses the experimental results.
Distributed quantum computing (DQC) has been studied for many years, with the main goal of scaling small quantum systems up to large-scale ones. The first studies on DQC were reported in21,22,23. In21, the author proposed that several quantum systems located physically far from each other send the required information to a base station, and showed that the overall computation becomes faster in proportion to the number of such distributed quantum systems.
Moreover, DQC has been used in many applications. In24, the authors considered two quantum devices as black boxes that are prevented from communicating with each other and designed trusted quantum cryptography to share a random key whose security is based on quantum physics. A practical application for quantum machine learning (QML) was presented in25, where a distributed secure quantum machine learning scheme allows a classical client to delegate a remote quantum machine learning task to a quantum server while preserving data privacy. In13, two main approaches, teledata and telegate, were discussed: in the telegate approach, gates are teleported so that they can be executed remotely without requiring the qubits to be nearby, while in the teledata approach qubits transfer their states to other systems without being moved physically.
Squash26 proposed a gate-partitioning method using METIS27 as the partitioning tool. Moghadam et al.28 used the min-cut approach presented in29 to divide the graph of the quantum circuit into smaller units. In30, the authors used a modified version of the graph partitioning algorithm of31 to minimize interaction between qubits. The authors of32 presented an architecture for DQC and partitioned the quantum circuit with the multilevel k-way hypergraph partitioning algorithm of33.
Most recently, one strategy to scale up the number of qubits has been the quantum internet34,35,36, a network of quantum systems that interconnect with each other remotely via quantum and classical links. Distributed quantum computing is used in this network. In fact, the quantum internet can be considered a virtual machine consisting of several qubits that scales with the number of quantum devices in the network, which may enable an exponential speed-up in quantum computing power3,35. In3, the authors considered the challenges and open problems of quantum internet design, highlighted the differences between quantum and classical networks, and discussed the critical research challenges in designing quantum communication networks.
Yimsiriwattana et al.37 first showed that, for contiguous non-local CNOT gates that share a common control qubit, the control line needs to be distributed only once, because it can be reused. This idea reduces the number of communications.
An automated method for distributing quantum circuits into K balanced partitions was investigated in20, where the problem was reduced to hypergraph partitioning. The algorithm consists of two steps, pre-processing and post-processing, for improving the circuit distribution. It handles any number of contiguous non-local CNOT gates that act on the same control qubit with target qubits in the same partition, noting that consecutive non-local CNOTs with this property can be executed with one teleportation.
Zomorodi et al. presented several works38,39,40,41,42,43,44,45 on optimizing and partitioning quantum circuits. Davarzani et al.38 presented a dynamic programming approach to distribute a quantum circuit into K parts so as to minimize the number of communications. Their approach consists of two steps: first, the quantum circuit is converted into a bipartite graph; then, the bipartite graph is distributed into K parts by a dynamic programming approach. In that study, they minimized the number of non-local CNOTs by converting the problem into a minimum K-cut problem.
In another study39, an algorithm was proposed for DQC consisting of two separated, long-distance quantum systems. Different configurations for the execution of non-local gates were examined, and the proposed algorithm was run for each configuration to obtain the number of required teleportations; the minimum number of communications was then selected among all configurations. However, this method has exponential complexity.
An approach based on a genetic algorithm was used in40 to distribute a quantum circuit into two partitions. The main purpose of the algorithm was to determine which qubit of a non-local gate should be teleported to the other system and when the teleported qubit should be returned to its home partition. In another of our works41, we presented a two-phase algorithm based on NSGA-II to bi-partition the qubits in the first phase and suggested two heuristics to optimize the number of non-local gates in the second phase. The authors of42,44 also addressed the problem of reducing the communication cost in a distributed quantum circuit composed of gates acting on up to three qubits and presented a new heuristic method to solve it.
An automated window-based method was proposed in46, in which the gate-teleportation and qubit-teleportation concepts were combined to minimize the communication cost efficiently.
The proposed algorithm
In this paper, we consider the problem of optimally distributing a given quantum circuit over a set of subsystems and propose a two-level optimizer that realizes a large-scale monolithic quantum circuit as a distributed circuit with the minimum number of required communications. The proposed method consists of two levels:
Level I: In this step, the number of subsystems and the quantum circuit are given as inputs, and a labelling \(P:\{q_{i} |i\in \{1,...,N_{q}\}\}\rightarrow \{1,2,...,K\}\) of qubits to subsystems is produced as output. Here, we partition the given circuit across the distributed quantum units to reach near-balanced partitions of qubits. For this purpose, an integer linear programming model is proposed to partition the quantum circuit into K parts. After the distribution of qubits, some gates become non-local, and each non-local gate requires two teleportations to move a qubit forward from its source unit to the destination unit and back again. Therefore, the number of communications obtained by this partitioning model is twice the number of non-local two-qubit gates.
Level II: At this level, the partitioning obtained in Level I is taken as input, and the minimum number of required teleportations is produced as output. As mentioned above, in the previous level each non-local gate needs two teleportations for the forward and backward communications. When one of the qubits of a non-local gate is teleported to the destination partition, further gates can be executed using this migrated qubit without the need to teleport it back immediately, which reduces the number of teleportations. This idea is exploited at this level to optimize the number of teleportations. The details of the two levels are given below; the notation of Table 1 is used throughout the paper.
Table 1 The notation of the proposed algorithm.
Level I: the partitioning of quantum circuit
In this section, a K-way partitioning method is proposed to distribute a quantum circuit into K balanced partitions. This problem is NP-hard and is defined as follows:
Consider an undirected and weighted graph \(G = (V, E)\), where V denotes the set of n vertices and E the set of edges. The balanced graph partitioning problem takes the graph G(V, E), the parameter K as the number of partitions and the parameter \(\omega\), known as the load-balance tolerance, as inputs. We wish to partition the graph into K balanced disjoint parts or sub-graphs \((V_{1}, V_{2},...,V_{K})\) so that \(V={V_{1} \cup V_{2}\cup ... \cup V_{K }}\). Two criteria must be satisfied, as follows:
Minimum number of cuts: the number of cuts among all the different sub-graphs is minimized as Eq. (1):
$$\begin{aligned} min \sum \limits _{k=1}^{K} \sum \limits _{l=k+1}^K \sum \limits _{v_{1}\in V_{k},v_{2}\in V_{l}} C({v_{1},v_{2}}) \end{aligned}$$
where \(C({v_{1},v_{2}})\) is the weight of edge \((v_{1},v_{2})\).
Load-balance: for all \(k=1,2,...,K\):
$$\begin{aligned} |V_k|\le \frac{(1+\omega )|V|}{K} \end{aligned}$$
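For instance, with \(|V| = 10\) qubits, \(K = 3\) partitions and \(\omega = 0.2\) (an illustrative example, not taken from the benchmarks used later), each part may hold at most \(\lfloor (1+0.2)\cdot 10/3 \rfloor = 4\) qubits.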
Since balanced graph partitioning is a combinatorial problem, heuristic approaches are commonly used to obtain acceptable computation times. We reduce the problem of the balanced distribution of a quantum circuit to balanced graph partitioning, where the qubits and gates are the nodes and edges of the graph, respectively, and we propose an integer linear programming model for K-way partitioning of quantum circuits. Let the quantum circuit consist of two sets, \(Q=\{q_{i} |i \in {1,...,N_{q}}\}\) and \({\mathcal {G}} =\{g_{j} |j\in {1,...,N_{2qubit}}\}\), where \({\mathcal {G}}\) is the set of two-qubit gates. Each \(g_j\) operates on two qubits \(q_{i_{1}}\) and \(q_{i_{2}}\) and is written as \(g_j(q_{i_{1}},q_{i_{2}})\). The binary variables of the proposed mathematical model are given in Eqs. (3) and (4):
$$\begin{aligned} f_{j}= & {} {\left\{ \begin{array}{ll} 1&{} \text {if }g_j\hbox { is\, a\, non-local\, gate}\\ 0&{} \text {otherwise} \end{array}\right. } \end{aligned}$$
$$\begin{aligned} p_{i,k}= & {} {\left\{ \begin{array}{ll} 1&{} \text {if }q_{i}\hbox { has\, been\, located \,on \,Part\, }k\\ 0&{} \text {otherwise} \end{array}\right. } \end{aligned}$$
The binary variable \(f_{j}\) is set to one when the two-qubit gate \(g_{j}(i_1,i_2)\) is non-local, i.e. Qubits \(i_1\) and \(i_2\) are located on different parts, and to zero otherwise (local gate). The binary variable \(p_{i,k}\) determines whether \(q_{i}\) is located on Part k or not. The proposed model is given in Eqs. (5) to (9):
$$\begin{aligned}&\min \sum \limits _{j=1}^{N_{2qubit}} f_{j} \end{aligned}$$
$$\begin{aligned}&\sum \limits _{i=1}^{N_{q}} p_{i,k} \le (1+\omega )|Q| /K \quad \forall k=1,...,K \end{aligned}$$
$$\begin{aligned}&\sum \limits _{k=1}^{K} p_{i,k}=1 \quad \forall i=1,...,N_q \end{aligned}$$
$$\begin{aligned}&f_{j}\ge p_{i_2,k}-p_{i_1,k} \quad \forall k=1,...,K, \quad g_j(i_{1},i_{2}) \in \mathcal {G} \end{aligned}$$
s.t. \(f_{j} \in \{0,1\}, \; p_{i,k}\in \{0,1\} \quad \forall i=1,...,N_{q},\; j=1,...,N_{2qubit},\; k=1,...,K\)
Equation (5) defines the objective function, namely the number of non-local gates. The load-balancing criterion is enforced by Eq. (6). Equation (7) ensures that each qubit is assigned to exactly one unit. Equations (8) and (9) guarantee that non-local gates are correctly accounted for.
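For illustration, the model of Eqs. (5)–(9) can be written down almost verbatim with an off-the-shelf ILP solver. The sketch below uses the open-source PuLP package and a small hypothetical gate list; it is only meant to make the formulation concrete and is not the authors' MATLAB implementation.

```python
import pulp

def partition_circuit(n_qubits, two_qubit_gates, K, omega):
    """ILP of Eqs. (5)-(9): minimize the number of non-local two-qubit gates.

    two_qubit_gates: list of (i1, i2) qubit-index pairs, one per gate g_j (0-based).
    Returns a dict mapping each qubit to its part label in {1, ..., K}.
    """
    prob = pulp.LpProblem("circuit_partitioning", pulp.LpMinimize)

    # p[i][k] = 1 iff qubit i is placed on part k (Eq. 4).
    p = [[pulp.LpVariable(f"p_{i}_{k}", cat="Binary") for k in range(K)]
         for i in range(n_qubits)]
    # f[j] = 1 iff gate j is non-local (Eq. 3).
    f = [pulp.LpVariable(f"f_{j}", cat="Binary") for j in range(len(two_qubit_gates))]

    # Objective (Eq. 5): number of non-local gates.
    prob += pulp.lpSum(f)

    # Load balance (Eq. 6): at most (1 + omega) * |Q| / K qubits per part.
    for k in range(K):
        prob += pulp.lpSum(p[i][k] for i in range(n_qubits)) <= (1 + omega) * n_qubits / K

    # Each qubit is assigned to exactly one part (Eq. 7).
    for i in range(n_qubits):
        prob += pulp.lpSum(p[i]) == 1

    # Non-local indicator (Eq. 8): if i2 lies on a part that does not hold i1,
    # the right-hand side equals 1 for that part, forcing f_j = 1.
    for j, (i1, i2) in enumerate(two_qubit_gates):
        for k in range(K):
            prob += f[j] >= p[i2][k] - p[i1][k]

    prob.solve()  # uses the CBC solver bundled with PuLP
    return {i: 1 + next(k for k in range(K) if p[i][k].value() > 0.5)
            for i in range(n_qubits)}

# Hypothetical 4-qubit circuit with five two-qubit gates, split into K = 2 parts:
# the optimum places {q0, q1} and {q2, q3} together, leaving one non-local gate.
print(partition_circuit(4, [(0, 1), (1, 2), (2, 3), (0, 1), (2, 3)], K=2, omega=0.1))
```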
The proposed model distributes the quantum circuit into K balanced units by mapping the qubits of the circuit onto K subsystems. The output is a labelling \(f: Q \rightarrow \{1,...,K\}\) of qubits satisfying the two criteria given in Eqs. (1) and (2). This function maps the qubits to a set of labels \(P=\{p_1,p_2,...,p_K\}\), which serves as the input of Level II.
Level II: the optimization level
After the partitioning of Level I, the qubits are distributed into K units according to the obtained labelling. As stated earlier, each non-local gate needs two teleportations to be executed. In many cases, however, teleporting a qubit from its source partition to the destination partition (the migrated qubit) makes it available for use by other gates without the need to teleport it back to its own partition; only afterwards is the migrated qubit teleported back to its home partition. At this level, we propose a recursive approach to implement this idea and minimize the total number of teleportations.
We first present a data structure for representing quantum circuits. This structure is a two-dimensional matrix, \(C_{N_{q}\times N_{g}}\), with \(N_q\) rows and \(N_g\) columns, defined as follows:
Qubits are located on the rows and numbered from one to \(N_{q}\), where the ith row indicates Qubit \(q_{i}\).
Gates are located on the columns and are numbered in the order of their execution in the quantum circuit.
Element \(C_{i,j} \quad (1\le i\le N_{q}, 1\le j\le N_{g})\) consists of two components, (index, label): index is the qubit that interacts with the ith qubit in the jth gate, and label is the role of that qubit, which is 'control' or 'target' for two-qubit gates and 'non' for one-qubit gates. These elements are constructed as follows:
For each two-qubit gate \(g_{i} (q_{t},q_{c})\), \(C_{t,i}=(q_c,\)'c') and \(C_{c,i}=(q_t,\)'t').
For each one-qubit gate \(g_{i} (q_{j})\), \(C_{j,i}=(q_j,non)\).
All other elements are set to zero (a short code sketch of these construction rules follows).
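A minimal sketch of these construction rules in code is given below; the gate list used at the end is a hypothetical example, and the indices are 0-based rather than the 1-based notation of the text.

```python
def build_circuit_matrix(n_qubits, gates):
    """Build the N_q x N_g matrix C of (index, label) pairs described above.

    A two-qubit gate g_i(q_t, q_c) is given as the pair (t, c) of target and
    control indices; a one-qubit gate g_i(q_j) is given as the single index j.
    """
    C = [[0] * len(gates) for _ in range(n_qubits)]
    for i, g in enumerate(gates):
        if isinstance(g, tuple):        # two-qubit gate: fill the rows of both qubits
            t, c = g
            C[t][i] = (c, "c")          # the target's row stores its control partner
            C[c][i] = (t, "t")          # the control's row stores its target partner
        else:                           # one-qubit gate
            C[g][i] = (g, "non")
    return C

# Hypothetical 3-qubit, 4-gate circuit: H(q0), CNOT(t=q1, c=q0), H(q2), CNOT(t=q2, c=q1).
for row in build_circuit_matrix(3, [0, (1, 0), 2, (2, 1)]):
    print(row)
```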
For example, consider the quantum circuit with 4 qubits and 7 gates shown in Fig. 2a; its corresponding matrix is given in Fig. 2b.
(a) A sample quantum circuit and (b) its corresponding matrix.
Algorithm I presents the main algorithm. It uses an array called run of size \(N_{g}\), where run[i] indicates whether the ith gate has been executed. The algorithm starts from the first gate, i.e. the first column of C (index s). Each column corresponds to one of the following three cases:
Column s indicates a local two-qubit gate.
Column s is a one-qubit gate.
Column s indicates a non-local gate.
In the first two cases, no teleportation is required: these gates are executed and run[s] is set to one (Lines 5–6 of the main algorithm). Otherwise, Gate \(g_s\) is a non-local gate and a teleportation is required to execute it. The teleportation cost is then increased by two (Line 10), since one additional teleportation must be accounted for transferring the qubit back to its source part. Function \(Find\_qubits(g_s)\) then finds the two qubits of Gate \(g_{s}\), called \(index_{1,s}\) and \(index_{2,s}\). The one of these qubits that leads to the minimum number of teleportations, called \(q\_teleport\), is selected (Line 12) and teleported from its own part to the destination part to execute Gate \(g_s\). The algorithm then scans the rest of the circuit for gates that can be executed without returning \(q\_teleport\) to its source, so that the teleported qubit is used optimally by the other gates that require it, as sketched below.
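The overall control flow of Algorithm I can be sketched as follows; this is a structural sketch only, in which is_local_or_single, find_qubits, choose_qubit_to_teleport, shares_qubit and execute stand in for the routines described in the text and in Algorithm II, not for the authors' actual implementation.

```python
def distribute(n_gates, P, is_local_or_single, find_qubits,
               choose_qubit_to_teleport, shares_qubit, execute):
    """Structural sketch of Algorithm I: walk the gates in order, count teleportations.

    P is the Level-I labelling of qubits to parts; the remaining arguments are
    placeholders for the helper routines described in the text.
    """
    run = [0] * n_gates               # run[s] = 1 once gate g_s has been executed
    teleportations = 0
    for s in range(n_gates):
        if run[s]:
            continue
        if is_local_or_single(s, P):  # cases 1 and 2: no teleportation is needed
            run[s] = 1
            continue
        # Case 3: g_s is non-local; count the forward and the eventual return teleportation.
        teleportations += 2
        q1, q2 = find_qubits(s)                          # Find_qubits(g_s)
        q_tel = choose_qubit_to_teleport(s, q1, q2, P)   # the qubit giving the lower cost
        run[s] = 1
        # Reuse the migrated qubit: try to execute later gates that also involve q_tel.
        for d in range(s + 1, n_gates):
            if not run[d] and shares_qubit(d, q_tel) and execute(s, d, q_tel, run):
                run[d] = 1
        # q_tel is then teleported back to its source part (already counted above).
    return teleportations
```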
Let Gate \(g_{d}\) in Column d be the first two-qubit gate, \(g_{d} (index_{1,d},index_{2,d})\), that has the qubit \(q\_teleport\) in common with Gate \(g_s\). It must be determined whether this gate can be executed or not. Function \(Execute(g_{s},g_{d},q\_teleport, run)\) is a recursive function that decides whether Gate \(g_{d}\) can be executed once \(q\_teleport\) has been teleported. This function is shown in Algorithm II. Three cases may occur in this function, as follows (a code sketch is given after the list):
The function returns False when there is at least one non-executed, non-local gate between \(g_{s}\) and \(g_{d}\) on which the execution of \(g_d\) depends. Let Column \(k\ (s< k <d)\), with gate \(g_{k} (index_{1,k},index_{2,k})\), be the first non-executed, non-local gate before Column d that has a common qubit with Gate \(g_d\); this column has two non-zero rows, \(index_{1,k}\) and \(index_{2,k}\). The function returns False (Line 11 of Algorithm II) and stops under the following condition:
$$\begin{aligned} \exists \, i,j \in \{1,2\}: \quad&index_{i,k} = index_{j,d} \quad \& \& \quad P_{index_{\{1,2\}\setminus i,k}} \ne P_{index_{\{1,2\}\setminus j,d}} \quad \& \& \\&\Bigl (C[k,index_{i,k}].label \ne C[d,index_{j,d}].label \quad \Vert \quad C[k,index_{i,k}].label = C[d,index_{j,d}].label = \text{`t'}\Bigr ) \end{aligned}$$
Equation (10) states that one of the qubits of \(g_{k}\) equals one of the qubits of \(g_{d}\), either with a different label or with the same label 't', and that the other qubits of \(g_{k}\) and \(g_d\) are located on different partitions. In this case, another teleportation would be required to execute \(g_{k}\), and the function returns False. Figure 3a illustrates this case: \(q_{1}\) is teleported from \(P_{1}\) to \(P_{3}\) to execute \(g_s\), and Function Execute then checks whether Gate \(g_{d}\) can be executed. It finds the non-executed, non-local Gate \(g_{k}\) before Gate \(g_{d}\), which shares the qubit \(q_{1}\) with a different label. Since the execution of Gate \(g_d\) depends on the execution of Gate \(g_{k}\), and Gate \(g_k\) is non-local, Gate \(g_d\) cannot be executed and Function Execute returns False.
Gate \(g_{k}\) may also be a non-local gate that shares a qubit with Gate \(g_{d}\) carrying the label 'c' in both gates. In this case, the execution of Gate \(g_{d}\) is independent of the execution of Gate \(g_{k}\); hence the non-execution of Gate \(g_{k}\) does not prevent the execution of Gate \(g_d\), and the other gates preceding \(g_{d}\) are considered (Lines 7–9). Equation (11) expresses this case.
$$\begin{aligned} index_{i,k}= index_{j,d}\quad \& \& \quad \bigl ( C[ k,index_{i,k} ].label =C[ d,index_{j,d} ].label = \text{`c'} \bigr ) \quad \exists \, i,j \in \{1,2\} \end{aligned}$$
This state is shown in Fig. 3b.
If there are no gates between Gate \(g_{s}\) and Gate \(g_{d}\) that prevent the execution of Gate \(g_d\), the function returns True (Lines 13–14).
If Gate \(g_{k}\) meets neither the condition of Eq. (10) nor that of Eq. (11), Function \(Execute(g_s,g_k,q\_teleport,run)\) is called recursively to determine whether \(g_{k}\) can be executed (Line 19).
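A structural sketch of this recursive check is given below. The helpers gate_qubits, gate_labels, parts and non_local are placeholders for lookups into the matrix C and the labelling P, and the two tests correspond to the conditions of Eqs. (10) and (11); the sketch is one reading of Algorithm II, not the authors' code.

```python
def execute(s, d, q_teleport, run, gate_qubits, gate_labels, parts, non_local):
    """Return True if gate g_d can be executed while q_teleport is migrated.

    gate_qubits(j) -> qubit indices of gate g_j, gate_labels(j) -> dict qubit -> 'c'/'t',
    parts(q) -> current part of qubit q, non_local(j) -> True if g_j is non-local.
    q_teleport is kept only to mirror the signature Execute(g_s, g_d, q_teleport, run).
    """
    for k in range(d - 1, s, -1):
        # Only a pending non-local gate sharing a qubit with g_d can block its execution;
        # local and one-qubit gates in between are assumed executable in this sketch.
        if run[k] or not non_local(k):
            continue
        common = set(gate_qubits(k)) & set(gate_qubits(d))
        if not common:
            continue
        q = common.pop()
        other_k = [x for x in gate_qubits(k) if x != q][0]
        other_d = [x for x in gate_qubits(d) if x != q][0]
        if gate_labels(k)[q] == gate_labels(d)[q] == "c":
            continue          # Eq. (11): common control qubit, g_d is independent of g_k
        if parts(other_k) != parts(other_d):
            return False      # Eq. (10): g_k blocks g_d and would need another teleportation
        # Otherwise g_d depends on g_k; check recursively whether g_k itself can be executed.
        if not execute(s, k, q_teleport, run, gate_qubits, gate_labels, parts, non_local):
            return False
    return True
```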
(a) By teleporting \(q_{1}\) to \(P_{3}\), Gate \(g_{d}\) cannot be executed. (b) By teleporting Qubit \(q_{1}\) to \(P_{3}\), Gate \(g_{d}\) can be executed.
The proposed method is illustrated with an example. Figure 4a shows the quantum circuit 2–4 dec taken from RevLib47, which consists of six qubits and 27 gates. Our algorithm distributes this circuit into three partitions, each containing two qubits. First, Level I of the proposed method distributes the circuit as shown in Fig. 4b, and Array P is set to [3,3,2,1,2,1]. At this level, the number of non-local gates is 13, so the total number of communications is 26. Table 2 shows the steps of Level II of our method on this circuit. In this table, \(g_s\), the status of \(g_s\) (one-qubit / local / non-local gate) and the qubit that is teleported (\(q\_teleport\)) are given in Column 2. Column 3 shows the partition to which \(q\_teleport\) is teleported (the destination partition), \(g_d\), and the partition to which \(q\_teleport\) is teleported back (the source partition), respectively. The array run, indicating whether the ith gate has been executed, is shown in Column 4, and Array P is given in the last column. The steps of Level II are as follows:
Step 1: \(g_1\) to \(g_6\) are one-qubit gates and no teleportation is required. Then \(run[i]=1,\{i=1,...,6\}\).
Step 2: \(g_7(q_1,q_4)\) is a non-local gate and \(q_1\) is teleported to \(P_1\). \(g_{10}\) is the first gate that has the common qubit \(q_1\) with \(g_7\). Since \(g_{10}\) depends on \(g_9\) and \(run[9]=0\), \(g_{10}\) cannot be executed. Therefore only \(g_7\) is executed and \(run[7]=1\), and \(q_1\) is teleported back to \(P_3\).
Step 3: \(g_8(q_3,q_4)\) is a non-local gate and \(q_3\) is teleported to \(P_1\). \(g_9\) is the first gate that has the common qubit \(q_3\) with \(g_8\), so \(run[i]=1,i=\{8,9\}\). No further gates can be executed, and \(q_3\) is teleported back to its source partition (\(P_2\)).
Step 4: \(g_{10}(q_1,q_4)\) is a non-local gate and \(q_1\) is teleported to \(P_1\). No later gate has a common qubit with \(g_{10}\), so \(run[10]=1\) and \(q_1\) is teleported back to \(P_3\).
Step 6: \(g_{12}(q_3,q_4)\) is a non-local gate and \(q_4\) is teleported to \(P_2\). \(g_{13}\) is the first gate that has the common qubit \(q_4\) with \(g_{12}\). Therefore \(g_{13}\) is also executed and \(run[i]=1,i=\{12,13\}\). Then \(q_4\) is teleported back to \(P_1\).
Step 7: \(g_{14}(q_2,q_3)\) is a non-local gate and \(q_2\) is teleported to \(P_2\). \(g_{17}\) and \(g_{18}\) have the common qubit \(q_2\) with \(g_{14}\). These gates depend on \(g_{15}\) and \(g_{16}\), which are local gates and can be executed. Then \(run[i]=1, i=\{14,...,18\}\) and \(q_2\) is teleported back to \(P_3\).
Steps 8 and 9: \(g_{19}\) and \(g_{20}\) are local gates and are executed. Then \(run[i]=1,i=\{19,20\}\).
Step 10: \(g_{21}(q_2,q_4)\) is a non-local gate and \(q_2\) is teleported to \(P_1\). Then \(run[i]=1,i=\{21,...,25\}\) and \(q_2\) is teleported back to \(P_3\).
Steps 11 and 12: \(g_{26}\) and \(g_{27}\) are local gates and are executed. Then \(run[i]=1,i=\{26,27\}\).
As shown above, each of the seven steps that handle a non-local gate requires two teleportations, so the total number of teleportations for this circuit is 14.
(a) Circuit 2–4 dec. (b) The circuit obtained by applying Level I. The gates \(g_{i},i=\{7,8,9,10,11,12,13,14,17,18,21,24,25\}\) are non-local gates.
Table 2 The steps of distribution Circuit 2-4dec into 3 partitions with 7 qubits and 27 gates.
We implemented our method in MATLAB on a Core i7 CPU operating at 1.8 GHz with 8 GB of memory. A variety of circuits was used to compare the performance of the proposed method with previous approaches: the method of39, the dynamic programming approach of38, the evolutionary algorithm of40, the automated approach of20 and the window-based method of46. The benchmark circuits are taken from48 (Circuits 1 to 10), RevLib47 (Circuits 11 to 15 and 26 to 31), quantum error-correction encoding circuits49 (Circuits 16 to 25) and n-qubit Quantum Fourier Transform (QFT) circuits50 with \(n \in \{16, 32, 64, 128, 256\}\). The benchmark circuits use gates from the gate library synthesized following the method in51; in this paper, CNOT, CZ and one-qubit gates are considered as the gate library.
To put the quality of the results into perspective, the deviation criterion of Eq. (12) is employed:
$$\begin{aligned} Dev=\frac{T_{ap}-T_{best}}{T_{best}}*100 \end{aligned}$$
where \(T_{best}\) is the best number of teleportations obtained among all approaches and \(T_{ap}\) is the number of teleportations obtained by the approach being compared.
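For example, if an approach requires \(T_{ap}=30\) teleportations on a circuit whose best result among all approaches is \(T_{best}=26\), then \(Dev=\frac{30-26}{26}\times 100\approx 15.4\%\), while a value of zero means the approach attains the best obtained teleportation count (the numbers here are illustrative, not taken from the tables).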
First, Table 3 compares the number of teleportations with the window-based approach of46. In this table, the number of qubits, gates and partitions are given in Columns 3, 4 and 5, and Columns 6, 7, 8 and 9 report the number of teleportations and Dev of the proposed method and of the method of46, respectively. As shown in this table, except for Circuits 2-4dec, Cycle17_3, Ham15-D3, Ham7_106 and Parity247, Dev for our approach is zero, demonstrating that the proposed method reaches the minimum number of teleportations and outperforms that of46 on these circuits.
Table 3 The number of teleportations \((N_t)\) and Dev of the proposed algorithm in comparison with the method of46.
It is also important to demonstrate how applying Level II to the partitioning of Level I improves the number of communications. As mentioned before, in Level I of the proposed algorithm two teleportations are needed to execute each non-local gate, because after a qubit is teleported to the destination partition it is teleported back to its source. The algorithm of Level II allows the teleported qubit to be used optimally in the destination partition and therefore saves many quantum teleportations. Figure 5a shows the effectiveness of applying Level II after Level I in decreasing the number of teleportations for Circuits 1 to 15. The bottom (blue) bar indicates the required number of communications after applying Level II to these benchmarks, and the top (orange) bar indicates the extra teleportations needed without Level II. As shown in this figure, in all cases over 70% of the non-local gates could be implemented locally, and Level II reduces the number of teleportations to less than half in all samples.
(a) Percentage of required number of teleportations to extra teleportations on Circuits 1 to 15. (b) The effect of the number of units on number of communications of Level I and II on Circuit Hwb50.
In another test, we considered the impact of the number of subsystems on the number of teleportations. A near-balanced distribution of qubits over more quantum units requires more communications. Figure 5b demonstrates the effect of the number of units (K) on the required number of teleportations for Circuit Hwb50, with 56 qubits and 6430 gates, where the qubits are distributed across {2, 3,..., 7} units. As shown in this figure, increasing the number of partitions used to distribute the qubits requires more communications among them and hence a larger number of teleportations. The blue and orange lines show the number of teleportations obtained before and after applying Level II to Circuit Hwb50, respectively.
Second, we tested our method on another set of benchmarks (numbered 16–25) and compared it with the method of46. The results are presented in Table 4, with the best results marked in bold. Except for three circuits, our approach outperforms that of46.
Table 4 Comparison of the number of teleportations of the proposed method (\(N_t\)) with the approach of46 on Circuits 16 to 25.
Third, we ran our method on the QFT circuit and compared it with the method of20. We distributed the quantum circuit across {4, 6, 8,..., 16} quantum devices; \(N_q\) and \(N_g\) are 201 and 19900, respectively. Figure 6 shows the ratio of the number of teleportations to the total number of two-qubit gates for our approach and the approach of20. As shown in this figure, this ratio grows as the number of partitions increases, and our approach performs better than the method of20 in all cases in terms of this ratio. Since the proposed approach considers all configurations in order to execute more non-local gates, it finds fewer communications than the approach of20, which only implements groups of non-local gates with a common control qubit. When the QFT is distributed over \(K=\{4,6,8\}\) devices, the number of teleportations obtained by the proposed approach differs considerably from that of20, while the two methods behave almost identically for \(K=\{10,12,14,16\}\).
The ratio of the required teleportations to the total number of two-qubit gates when the QFT circuit is distributed across 4, 6, 8,..., 16 quantum devices, in comparison with20.
In another test, we demonstrate the effect of the load-balance tolerance (\(\omega\)) on the number of non-local gates in Level I. Figure 7 shows the number of non-local gates for \(\omega =\{0.1,...,0.9\}\) on one sample circuit. As shown in this figure, \(N_{non}\) decreases as the load-balance tolerance increases. According to Eq. (6), when the factor \(\omega\) increases, qubits that communicate heavily with each other can be placed in the same partition, and therefore the number of non-local gates is reduced.
The effect of the load-balance tolerance (\(\omega\)) on \(N_{non}\).
Table 5 The teleportation cost of the proposed method (\(N_t\)) on Circuits 26 to 31 in comparison with38,39,40.
Another set of test samples was taken from RevLib to compare the proposed method with the approaches of38,39,40, namely Alu_primitive, Parity, Flip_flop and Sym9_147 (Circuits 26 to 31). The number of qubits, gates and partitions are given in Columns 2, 3 and 4 of Table 5, respectively, and Columns 5, 6 and 7 report the number of teleportations of38,39,40. The last column shows the number of teleportations obtained by the proposed approach. As can be seen, the proposed method outperforms the other approaches.
In this paper, a two-level hierarchical architecture of distributed quantum computing was proposed to build large quantum systems in which the number of communications among quantum subsystems is minimized. In the first level, an integer linear programming model was proposed to distribute the qubits to K balanced subsystems. In the second level, we presented a new data structure for representing quantum circuits. Also, according to the partitioning of the first level, when one of the qubits of a non-local gate is teleported from its source subsystem to the destination, it is used optimally by other gates in the destination subsystem before being teleported back to its own subsystem. Moreover, we proposed a recursive method to optimize the number of teleportations. Finally, we ran the proposed method on the different benchmarks and showed that it produces better results in comparison with the previous ones.
Krantz, P. et al. A quantum engineer's guide to superconducting qubits. Appl. Phys. Rev. 6, 021318 (2019).
Huang, H.-L. et al. Experimental blind quantum computing for a classical client. Phys. Rev. Lett. 119, 050503 (2017).
Cacciapuoti, A. S. et al. Quantum internet: Networking challenges in distributed quantum computing. IEEE Netw. (2019).
Cacciapuoti, A. S., Caleffi, M., Van Meter, R. & Hanzo, L. When entanglement meets classical communications: Quantum teleportation for the quantum internet. IEEE Trans. Commun. (2020).
Cacciapuoti, A. S. & Caleffi, M. Toward the quantum internet: A directional-dependent noise model for quantum signal processing. in ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP). 7978–7982 (IEEE, 2019).
Krojanski, H. G. & Suter, D. Scaling of decoherence in wide NMR quantum registers. Phys. Rev. Lett. 93, 090501 (2004).
Cuomo, D., Caleffi, M. & Cacciapuoti, A. S. Towards a distributed quantum computing ecosystem. arXiv preprint arXiv:2002.11808 (2020).
Blatt, R. & Roos, C. F. Quantum simulations with trapped ions. Nat. Phys. 8, 277–284 (2012).
Bruzewicz, C. D., Chiaverini, J., McConnell, R. & Sage, J. M. Trapped-ion quantum computing: Progress and challenges. Appl. Phys. Rev. 6, 021314 (2019).
Kjaergaard, M. et al. Superconducting qubits: Current state of play. Annu. Rev. Condens. Matter Phys. 11, 369–395 (2020).
Huang, H.-L., Wu, D., Fan, D. & Zhu, X. Superconducting quantum computing: A review. Sci. China Inf. Sci. 63, 1–32 (2020).
Slussarenko, S. & Pryde, G. J. Photonic quantum information processing: A concise review. Appl. Phys. Rev. 6, 041303 (2019).
Van Meter, R., Ladd, T. D., Fowler, A. G. & Yamamoto, Y. Distributed quantum computation architecture using semiconductor nanophotonics. Int. J. Quantum Inf. 8, 295–323 (2010).
Monroe, C. et al. Large-scale modular quantum-computer architecture with atomic memory and photonic interconnects. Phys. Rev. A 89, 022317 (2014).
Ahsan, M., Meter, R. V. & Kim, J. Designing a million-qubit quantum computer using a resource performance simulator. ACM J. Emerg. Technol. Comput. Syst. (JETC) 12, 1–25 (2015).
Bennett, C. H. et al. Teleporting an unknown quantum state via dual classical and Einstein-Podolsky-Rosen channels. Phys. Rev. Lett. 70, 1895 (1993).
Duan, L.-M., Lukin, M. D., Cirac, J. I. & Zoller, P. Long-distance quantum communication with atomic ensembles and linear optics. Nature 414, 413–418 (2001).
Sangouard, N., Simon, C., De Riedmatten, H. & Gisin, N. Quantum repeaters based on atomic ensembles and linear optics. Rev. Mod. Phys. 83, 33 (2011).
Sundaram, R. G., Gupta, H. & Ramakrishnan, C. Efficient distribution of quantum circuits. in 35th International Symposium on Distributed Computing (DISC 2021) (Schloss Dagstuhl-Leibniz-Zentrum für Informatik, 2021).
Andrés-Martínez, P. Automated distribution of quantum circuits. Theor. Comput. Sci. 410, 2489–2510 (2018).
Grover, L. K. Quantum telecomputation. arXiv preprint arXiv:quant-ph/9704012 (1997).
Cirac, J., Ekert, A., Huelga, S. & Macchiavello, C. Distributed quantum computation over noisy channels. Phys. Rev. A 59, 4249 (1999).
Cleve, R. & Buhrman, H. Substituting quantum entanglement for communication. Phys. Rev. A 56, 1201 (1997).
Reichardt, B. W., Unger, F. & Vazirani, U. Classical command of quantum systems. Nature 496, 456–460 (2013).
Sheng, Y.-B. & Zhou, L. Distributed secure quantum machine learning. Sci. Bull. 62, 1025–1029 (2017).
Dousti, M. J., Shafaei, A. & Pedram, M. Squash 2: A hierarchical scalable quantum mapper considering ancilla sharing. arXiv preprint arXiv:1512.07402 (2015).
Karypis, G. & Kumar, V. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM J. Sci. Comput. 20, 359–392 (1998).
Moghadam, M. C., Mohammadzadeh, N., Sedighi, M. & Zamani, M. S. A hierarchical layout generation method for quantum circuits. in The 17th CSI International Symposium on Computer Architecture & Digital Systems (CADS 2013). 51–57 (IEEE, 2013).
Breuer, M. A. A class of min-cut placement algorithms. in Proceedings of the 14th Design Automation Conference. 284–290 (1977).
Wang, G. & Khainovski, O. A fault-tolerant, ion-trap-based architecture for the quantum simulation algorithm. Measurement 10, 10–4 (2010).
Stoer, M. & Wagner, F. A simple min-cut algorithm. J. ACM (JACM) 44, 585–591 (1997).
Sargaran, S. & Mohammadzadeh, N. Saqip: A scalable architecture for quantum information processors. ACM Trans. Architect. Code Optim. (TACO) 16, 1–21 (2019).
Karypis, G. & Kumar, V. Multilevel k-way hypergraph partitioning. VLSI Des. 11, 285–300 (2000).
Kimble, H. J. The quantum internet. Nature 453, 1023–1030 (2008).
Caleffi, M., Cacciapuoti, A. S. & Bianchi, G. Quantum internet: From communication to distributed computing! in Proceedings of the 5th ACM International Conference on Nanoscale Computing and Communication. 1–4 (2018).
Bourzac, K. 4 tough chemistry problems that quantum computers will solve [news]. IEEE Spectrum 54, 7–9 (2017).
Yimsiriwattana, A. & Lomonaco Jr, S. J. Generalized GHZ states and distributed quantum computing. arXiv preprint arXiv:quant-ph/0402148 (2004).
Davarzani, Z., Zomorodi-Moghadam, M., Houshmand, M. & Nouri-baygi, M. A dynamic programming approach for distributing quantum circuits by bipartite graphs. Quantum Inf. Process. 19, 1–18 (2020).
Zomorodi-Moghadam, M., Houshmand, M. & Houshmandi, M. Optimizing teleportation cost in distributed quantum circuits. Theor. Phys. 57, 848–861 (2018).
Zahra Mohammadi, M. Z.-M., Houshmand, M. & Houshmandi, M. An evolutionary approach to optimizing communication cost in distributed quantum computation. arXiv (2019).
Ghodsollahee, I. et al. Connectivity matrix model of quantum circuits and its application to distributed quantum circuit optimization. Quantum Inf. Process. 20, 1–21 (2021).
Daei, O., Navi, K. & Zomorodi-Moghadam, M. Optimized quantum circuit partitioning. Int. J. Theor. Phys. 59, 3804–3820 (2020).
Dadkhah, D., Zomorodi, M., Hosseini, S. E., Plawiak, P. & Zhou, X. Reordering and partitioning of distributed quantum circuits. IEEE Access. 10, 70329–70341. https://doi.org/10.1109/ACCESS.2022.3186485 (2022).
Daei, O., Navi, K. & Zomorodi, M. Improving the teleportation cost in distributed quantum circuits based on commuting of gates. Int. J. Theor. Phys. 60(9), 3494–3513. https://doi.org/10.1007/s10773-021-04920-y (2021).
Dadkhah, D., Zomorodi, M. & Hosseini, S. E. A new approach for optimization of distributed quantum circuits. Int. J. Theor. Phys. 60(9), 3271–3285. https://doi.org/10.1007/s10773-021-04904-y (2021).
Nikahd, E., Mohammadzadeh, N., Sedighi, M. & Zamani, M. S. Automated window-based partitioning of quantum circuits. Phys. Scr. 96, 035102 (2021).
Wille, R., Große, D., Teuber, L., Dueck, G. W. & Drechsler, R. Revlib: An online resource for reversible functions and reversible circuits. in 38th International Symposium on Multiple Valued Logic (ISMVL 2008). 220–225 (IEEE, 2008).
Maslov, D. Reversible logic synthesis benchmarks page. http://www.cs.uvic.ca/maslov/ (2005).
Cross, A. W., DiVincenzo, D. P. & Terhal, B. M. A comparative code study for quantum fault-tolerance. arXiv preprint arXiv:0711.1556 (2007).
Fowler, A. G. & Hollenberg, L. C. Scalability of Shor's algorithm with a limited set of rotation gates. Phys. Rev. A 70, 032329 (2004).
Barenco, A. et al. Elementary gates for quantum computation. Phys. Rev. A 52, 3457–3467 (1995).
Department of Computer Engineering, Ferdowsi University of Mashhad, Mashhad, Iran
Zohreh Davarzani & Mariam Zomorodi
Department of Computer Engineering, Payame Noor University, Tehran, Iran
Zohreh Davarzani
Department of Computer Science, Faculty of Computer Science and Telecommunications, Cracow University of Technology, Krakow, Poland
Mariam Zomorodi
Department of Computer Engineering, Mashhad Branch, Islamic Azad University, Mashhad, Iran
Mahboobeh Houshmand
Z.D. contributed in writing the main manuscript text and M.Z. and M.H. contributed in revising, verifying the results, and improving the writing of the manuscript.
Correspondence to Mariam Zomorodi.
Davarzani, Z., Zomorodi, M. & Houshmand, M. A hierarchical approach for building distributed quantum systems. Sci Rep 12, 15421 (2022). https://doi.org/10.1038/s41598-022-18989-w
What intuitive explanation is there for the central limit theorem?
In several different contexts we invoke the central limit theorem to justify whatever statistical method we want to adopt (e.g., approximate the binomial distribution by a normal distribution). I understand the technical details as to why the theorem is true but it just now occurred to me that I do not really understand the intuition behind the central limit theorem.
So, what is the intuition behind the central limit theorem?
Layman explanations would be ideal. If some technical detail is needed please assume that I understand the concepts of a pdf, cdf, random variable etc but have no knowledge of convergence concepts, characteristic functions or anything to do with measure theory.
central-limit-theorem
user28
Good question, although my immediate reaction, backed up by my limited experience of teaching this, is that the CLT isn't initially at all intuitive to most people. If anything, it's counter-intuitive! – onestop, Oct 19, 2010 at 2:39
@onestop AMEN! staring at the binomial distribution with p = 1/2 as n increases does show the CLT is lurking - but the intuition for it has always escaped me. – ronaf
Similar question with some nice ideas: stats.stackexchange.com/questions/643/…
Not an explanation but this simulation can be helpful understanding it. – David Lane
I apologize in advance for the length of this post: it is with some trepidation that I let it out in public at all, because it takes some time and attention to read through and undoubtedly has typographic errors and expository lapses. But here it is for those who are interested in the fascinating topic, offered in the hope that it will encourage you to identify one or more of the many parts of the CLT for further elaboration in responses of your own.
Most attempts at "explaining" the CLT are illustrations or just restatements that assert it is true. A really penetrating, correct explanation would have to explain an awful lot of things.
Before looking at this further, let's be clear about what the CLT says. As you all know, there are versions that vary in their generality. The common context is a sequence of random variables, which are certain kinds of functions on a common probability space. For intuitive explanations that hold up rigorously I find it helpful to think of a probability space as a box with distinguishable objects. It doesn't matter what those objects are but I will call them "tickets." We make one "observation" of a box by thoroughly mixing up the tickets and drawing one out; that ticket constitutes the observation. After recording it for later analysis we return the ticket to the box so that its contents remain unchanged. A "random variable" basically is a number written on each ticket.
In 1733, Abraham de Moivre considered the case of a single box where the numbers on the tickets are only zeros and ones ("Bernoulli trials"), with some of each number present. He imagined making $n$ physically independent observations, yielding a sequence of values $x_1, x_2, \ldots, x_n$, all of which are zero or one. The sum of those values, $y_n = x_1 + x_2 + \ldots + x_n$, is random because the terms in the sum are. Therefore, if we could repeat this procedure many times, various sums (whole numbers ranging from $0$ through $n$) would appear with various frequencies--proportions of the total. (See the histograms below.)
Now one would expect--and it's true--that for very large values of $n$, all the frequencies would be quite small. If we were to be so bold (or foolish) as to attempt to "take a limit" or "let $n$ go to $\infty$", we would conclude correctly that all frequencies reduce to $0$. But if we simply draw a histogram of the frequencies, without paying any attention to how its axes are labeled, we see that the histograms for large $n$ all begin to look the same: in some sense, these histograms approach a limit even though the frequencies themselves all go to zero.
These histograms depict the results of repeating the procedure of obtaining $y_n$ many times. $n$ is the "number of trials" in the titles.
The insight here is to draw the histogram first and label its axes later. With large $n$ the histogram covers a large range of values centered around $n/2$ (on the horizontal axis) and a vanishingly small interval of values (on the vertical axis), because the individual frequencies grow quite small. Fitting this curve into the plotting region has therefore required both a shifting and rescaling of the histogram. The mathematical description of this is that for each $n$ we can choose some central value $m_n$ (not necessarily unique!) to position the histogram and some scale value $s_n$ (not necessarily unique!) to make it fit within the axes. This can be done mathematically by changing $y_n$ to $z_n = (y_n - m_n) / s_n$.
Remember that a histogram represents frequencies by areas between it and the horizontal axis. The eventual stability of these histograms for large values of $n$ should therefore be stated in terms of area. So, pick any interval of values you like, say from $a$ to $b \gt a$ and, as $n$ increases, track the area of the part of the histogram of $z_n$ that horizontally spans the interval $(a, b]$. The CLT asserts several things:
No matter what $a$ and $b$ are, if we choose the sequences $m_n$ and $s_n$ appropriately (in a way that does not depend on $a$ or $b$ at all), this area indeed approaches a limit as $n$ gets large.
The sequences $m_n$ and $s_n$ can be chosen in a way that depends only on $n$, the average of values in the box, and some measure of spread of those values--but on nothing else--so that regardless of what is in the box, the limit is always the same. (This universality property is amazing.)
Specifically, that limiting area is the area under the curve $y = \exp(-z^2/2) / \sqrt{2 \pi}$ between $a$ and $b$: this is the formula of that universal limiting histogram.
The first generalization of the CLT adds,
When the box can contain numbers in addition to zeros and ones, exactly the same conclusions hold (provided that the proportions of extremely large or small numbers in the box are not "too great," a criterion that has a precise and simple quantitative statement).
The next generalization, and perhaps the most amazing one, replaces this single box of tickets with an ordered indefinitely long array of boxes with tickets. Each box can have different numbers on its tickets in different proportions. The observation $x_1$ is made by drawing a ticket from the first box, $x_2$ comes from the second box, and so on.
Exactly the same conclusions hold provided the contents of the boxes are "not too different" (there are several precise, but different, quantitative characterizations of what "not too different" has to mean; they allow an astonishing amount of latitude).
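Before asking why these assertions hold, it can help to watch them numerically. The short simulation below (the box contents are arbitrary; nothing about them is special) draws sums of $n$ tickets, standardizes them with $m_n = n\mu$ and $s_n = \sqrt{n}\,\sigma$, where $\mu$ and $\sigma$ are the mean and SD of the box, and compares the area of the histogram of $z_n$ over an interval with the Gaussian area from assertion 3.

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)
box = np.array([0, 0, 0, 1, 1])          # an arbitrary box of tickets
mu, sigma = box.mean(), box.std()

n, repeats = 400, 20_000
# Draw n tickets with replacement, `repeats` times, and form the sums y_n.
y = rng.choice(box, size=(repeats, n)).sum(axis=1)
z = (y - n * mu) / (sqrt(n) * sigma)     # standardize with m_n = n*mu, s_n = sqrt(n)*sigma

# Area of the z_n histogram over (a, b] versus the area under exp(-z^2/2)/sqrt(2*pi).
a, b = -1.0, 0.5
empirical = np.mean((z > a) & (z <= b))
gaussian = 0.5 * (erf(b / sqrt(2)) - erf(a / sqrt(2)))
print(empirical, gaussian)               # the two areas agree closely, and better as n grows
```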
These five assertions, at a minimum, need explaining. There's more. Several intriguing aspects of the setup are implicit in all the statements. For example,
What is special about the sum? Why don't we have central limit theorems for other mathematical combinations of numbers such as their product or their maximum? (It turns out we do, but they are not quite so general nor do they always have such a clean, simple conclusion unless they can be reduced to the CLT.) The sequences of $m_n$ and $s_n$ are not unique but they're almost unique in the sense that eventually they have to approximate the expectation of the sum of $n$ tickets and the standard deviation of the sum, respectively (which, in the first two statements of the CLT, equals $\sqrt{n}$ times the standard deviation of the box).
The standard deviation is one measure of the spread of values, but it is by no means the only one nor is it the most "natural," either historically or for many applications. (Many people would choose something like a median absolute deviation from the median, for instance.)
Why does the SD appear in such an essential way?
Consider the formula for the limiting histogram: who would have expected it to take such a form? It says the logarithm of the probability density is a quadratic function. Why? Is there some intuitive or clear, compelling explanation for this?
I confess I am unable to reach the ultimate goal of supplying answers that are simple enough to meet Srikant's challenging criteria for intuitiveness and simplicity, but I have sketched this background in the hope that others might be inspired to fill in some of the many gaps. I think a good demonstration will ultimately have to rely on an elementary analysis of how values between $\alpha_n = a s_n + m_n$ and $\beta_n = b s_n + m_n$ can arise in forming the sum $x_1 + x_2 + \ldots + x_n$. Going back to the single-box version of the CLT, the case of a symmetric distribution is simpler to handle: its median equals its mean, so there's a 50% chance that $x_i$ will be less than the box's mean and a 50% chance that $x_i$ will be greater than its mean. Moreover, when $n$ is sufficiently large, the positive deviations from the mean ought to compensate for the negative deviations in the mean. (This requires some careful justification, not just hand waving.) Thus we ought primarily to be concerned about counting the numbers of positive and negative deviations and only have a secondary concern about their sizes. (Of all the things I have written here, this might be the most useful at providing some intuition about why the CLT works. Indeed, the technical assumptions needed to make the generalizations of the CLT true essentially are various ways of ruling out the possibility that rare huge deviations will upset the balance enough to prevent the limiting histogram from arising.)
This shows, to some degree anyway, why the first generalization of the CLT does not really uncover anything that was not in de Moivre's original Bernoulli trial version.
At this point it looks like there is nothing for it but to do a little math: we need to count the number of distinct ways in which the number of positive deviations from the mean can differ from the number of negative deviations by any predetermined value $k$, where evidently $k$ is one of $-n, -n+2, \ldots, n-2, n$. But because vanishingly small errors will disappear in the limit, we don't have to count precisely; we only need to approximate the counts. To this end it suffices to know that
$$\text{The number of ways to obtain } k \text{ positive and } n-k \text{ negative values out of } n$$
$$\text{equals } \frac{n-k+1}{k} \text{ times the number of ways to get } k-1 \text{ positive and } n-k+1 \text{ negative values.}$$
(That's a perfectly elementary result so I won't bother to write down the justification.) Now we approximate wholesale. The maximum frequency occurs when $k$ is as close to $n/2$ as possible (also elementary). Let's write $m = n/2$. Then, relative to the maximum frequency, the frequency of $m+j+1$ positive deviations ($j \ge 0$) is estimated by the product
$$\frac{m+1}{m+1} \frac{m}{m+2} \cdots \frac{m-j+1}{m+j+1}$$
$$=\frac{1 - 1/(m+1)}{1 + 1/(m+1)} \frac{1-2/(m+1)}{1+2/(m+1)} \cdots \frac{1-j/(m+1)}{1+j/(m+1)}.$$
More than a century before de Moivre was writing, John Napier had invented logarithms to simplify multiplication, so let's take advantage of this. Using the approximation
$$\log\left(\frac{1-x}{1+x}\right) = -2x - \frac{2x^3}{3} + O(x^5),$$
we find that the log of the relative frequency is approximately
$$-\frac{2}{m+1}\left(1 + 2 + \cdots + j\right) - \frac{2}{3(m+1)^3}\left(1^3+2^3+\cdots+j^3\right) = -\frac{j^2}{m} + O\left(\frac{j^4}{m^3}\right).$$
Because the error in approximating this sum by $-j^2/m$ is on the order of $j^4/m^3$, the approximation ought to work well provided $j^4$ is small relative to $m^3$. That covers a greater range of values of $j$ than is needed. (It suffices for the approximation to work for $j$ only on the order of $\sqrt{m}$ which asymptotically is much smaller than $m^{3/4}$.)
Consequently, writing $$z = \sqrt{2}\,\frac{j}{\sqrt{m}} = \frac{j/n}{1 / \sqrt{4n}}$$ for the standardized deviation, the relative frequency of deviations of size given by $z$ must be proportional to $\exp(-z^2/2)$ for large $m.$ Thus appears the Gaussian law of #3 above.
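If you would rather check this approximation numerically than take the algebra on faith, a few lines suffice (the particular $n$ below is an arbitrary choice; scipy's `gammaln` is used only to evaluate log binomial coefficients stably):

```python
import numpy as np
from scipy.special import gammaln

n = 10_000                # number of tickets; the maximum frequency sits at m = n/2
m = n // 2
j = np.arange(0, int(4 * np.sqrt(m)))     # deviations around the centre

def log_binom(n, k):
    # log of the binomial coefficient C(n, k)
    return gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)

# Log of the frequency of m + j positive deviations, relative to the maximum at m.
log_rel = log_binom(n, m + j) - log_binom(n, m)

z = np.sqrt(2) * j / np.sqrt(m)           # the standardized deviation defined above
error = np.abs(log_rel - (-z**2 / 2))
print(f"largest z^2/2 considered:  {np.max(z**2 / 2):.1f}")
print(f"worst approximation error: {np.max(error):.3f}")
```

The error stays tiny compared with the quadratic term itself, just as the $O(j^4/m^3)$ bound promises.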
Obviously much more analysis of this sort should be presented to justify the other assertions in the CLT, but I'm running out of time, space, and energy and I've probably lost 90% of the people who started reading this anyway. This simple approximation, though, suggests how de Moivre might originally have suspected that there is a universal limiting distribution, that its logarithm is a quadratic function, and that the proper scale factor $s_n$ must be proportional to $\sqrt{n}$ (as shown by the denominator of the preceding formula). It is difficult to imagine how this important quantitative relationship could be explained without invoking some kind of mathematical information and reasoning; anything less would leave the precise shape of the limiting curve a complete mystery.
whuber ♦
$\begingroup$ +1 It will take me some time to digest your answer. I admit that asking for an intuition for the CLT within the constraints I imposed may be nearly impossible. $\endgroup$
$\begingroup$ Thank you for taking the time to write this, it's the most helpful exposition of the CLT I've seen that is also very accessible mathematically. $\endgroup$
– jeremy radcliff
$\begingroup$ Yes, quite dense.... so many questions. How does the first histogram have 2 bars (there was only 1 trial!); can I just ignore that? And the convention is usually to avoid horizontal gaps between bars of a histogram, right? (Because, as you say, area is important, and the area will eventually be calculated over a continuous (i.e. no gaps) domain?) So I'll ignore the gaps, too...? Even I had gaps when I first tried to understand it :) $\endgroup$
– The Red Pea
$\begingroup$ @TheRed Thank you for your questions. I have edited the first part of this post to make these points a little clearer. $\endgroup$
– whuber ♦
$\begingroup$ Ah, yes, I confused "number of trials= $n$ = "observations"" with "number of times (this entire procedure) is repeated". So if a ticket can only have one of the two values, 0 or 1, and you only observe one ticket, the sum of those tickets' values can only be one of two things: 0, or 1. Hence your first histogram has two bars. Moreover, these bars are roughly equal in height because we expect 0 and 1 to occur in equal proportions. $\endgroup$
The nicest animation I know: http://www.ms.uky.edu/~mai/java/stat/GaltonMachine.html
The simplest words I have read: http://elonen.iki.fi/articles/centrallimit/index.en.html
Suppose you throw an ordinary die ten times. If you sum the results of these ten throws, what you get is likely to be closer to 30-40 than to the maximum, 60 (all sixes), or, on the other hand, the minimum, 10 (all ones).
The reason for this is that you can get the middle values in many more different ways than the extremes. Example: when throwing two dice: 1+6 = 2+5 = 3+4 = 7, but only 1+1 = 2 and only 6+6 = 12.
That is: even though each of the six numbers is equally likely when you throw one die, the extremes are less probable than middle values in sums of several dice.
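You can make the counting explicit with a couple of lines (purely illustrative):

```python
from collections import Counter
from itertools import product

# Count how many ordered outcomes of two fair dice give each possible sum.
counts = Counter(a + b for a, b in product(range(1, 7), repeat=2))
for total in sorted(counts):
    print(total, counts[total])   # 7 can be made 6 ways; 2 and 12 only 1 way each
```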
The Red Pea
glassy
$\begingroup$ that first diagram (dropping balls) only works because we drop the balls above the middle. if we dropped them from the corner, they'd stack up towards the corner. $\endgroup$
– d-_-b
An observation concerning the CLT may be the following. When you have a sum $$ S = X_1 + X_2 + \ldots + X_n $$ of a lot of random components, if one is "smaller than usual" then this is mostly compensated for by some of the other components being "larger than usual". In other words, negative deviations and positive deviations from the component means cancel each other out in the summation. Personally, I have no clear-cut intuition why exactly the remaining deviations form a distribution that looks more and more normal the more terms you have.
There are many versions of the CLT, some stronger than others, some with relaxed conditions such as a moderate dependence between the terms and/or non-identical distributions for the terms. In the simplest-to-prove versions of the CLT, the proof is usually based on the moment-generating function (or Laplace-Stieltjes transform or some other appropriate transform of the density) of the sum $S$. Writing this as a Taylor expansion and keeping only the most dominant term gives you the moment-generating function of the normal distribution. So for me personally, the normality is something that follows from a bunch of equations and I can not provide any further intuition than that.
It should be noted, however, that the sum's distribution never really is normally distributed, nor does the CLT claim that it would be. If $n$ is finite, there is still some distance to the normal distribution, and if $n=\infty$ both the mean and the variance are infinite as well. In the latter case you could take the mean of the infinite sum, but then you get a deterministic number without any variance at all, which could hardly be labelled as "normally distributed".
This may pose problems with practical applications of the CLT. Usually, if you are interested in the distribution of $S/n$ close to its center, CLT works fine. However, convergence to the normal is not uniform everywhere and the further you get away from the center, the more terms you need to have a reasonable approximation.
With all the "sanctity" of the Central Limit Theorem in statistics, its limitations are often overlooked all too easily. Below I give two slides from my course making the point that CLT utterly fails in the tails, in any practical use case. Unfortunately, a lot of people specifically use CLT to estimate tail probabilities, knowingly or otherwise.
answered Mar 22, 2015 at 15:02
StijnDeVuyst
$\begingroup$ This is great material and wise advice. I cannot upvote it, unfortunately, because the assertions in "This normality is a mathematical artifact and I think it is not useful to search for any deeper truth or intuition behind it" are deeply troubling. They seem to suggest that (1) we shouldn't rely on mathematics to help us theoretically and (2) there is no point to understanding the math in the first place. I hope that other posts in this thread already go a long way towards disproving the second assertion. The first is so self-inconsistent it hardly bears further analysis. $\endgroup$
$\begingroup$ @whuber. You are right, I am out of my league perhaps. I'll edit. $\endgroup$
– StijnDeVuyst
$\begingroup$ Thank you for reconsidering the problematic part, and a big +1 for the rest. $\endgroup$
Intuition is a tricky thing. It's even trickier with theory in our hands tied behind our back.
The CLT is all about sums of tiny, independent disturbances. "Sums" in the sense of the sample mean, "tiny" in the sense of finite variance (of the population), and "disturbances" in the sense of plus/minus around a central (population) value.
For me, the device that appeals most directly to intuition is the quincunx, or 'Galton box' (see the Wikipedia entry on the 'bean machine'). The idea is to roll a tiny little ball down the face of a board adorned by a lattice of equally spaced pins. On its way down the ball diverts right and left (...randomly, independently) and collects at the bottom. Over time, we see a nice bell shaped mound form right before our eyes.
The CLT says the same thing. It is a mathematical description of this phenomenon (more precisely, the quincunx is physical evidence for the normal approximation to the binomial distribution). Loosely speaking, the CLT says that as long as our population is not overly misbehaved (that is, if the tails of the PDF are sufficiently thin), then the sample mean (properly scaled) behaves just like that little ball bouncing down the face of the quincunx: sometimes it falls off to the left, sometimes it falls off to the right, but most of the time it lands right around the middle, in a nice bell shape.
The majesty of the CLT (to me) is that the shape of the underlying population is irrelevant. Shape only plays a role insofar as it dictates the length of time we need to wait (in the sense of sample size).
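A quincunx is also easy to fake in a few lines, if you want to watch the mound appear without building one (the number of rows, the ball count, and the text-histogram scaling are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)

rows = 12          # pins each ball passes on its way down
balls = 100_000

# Each ball makes `rows` independent left/right decisions; its final bin is
# simply the number of rightward bounces.
bins = rng.integers(0, 2, size=(balls, rows)).sum(axis=1)

counts = np.bincount(bins, minlength=rows + 1)
for k, c in enumerate(counts):
    print(f"{k:2d} {'#' * (60 * c // counts.max())}")
```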
answered Oct 19, 2010 at 3:35
user1108
This answer hopes to give an intuitive meaning of the central limit theorem, using simple calculus techniques (Taylor expansion of order 3). Here is the outline:
What the CLT says
An intuitive proof of the CLT using simple calculus
Why the normal distribution?
We will mention the normal distribution at the very end; because the fact that the normal distribution eventually comes up does not bear much intuition.
1. What the central limit theorem says: several versions of the CLT
There are several equivalent versions of the CLT. The textbook statement of the CLT says that for any real $x$ and any sequence of independent random variables $X_1,\cdots,X_n$ with zero mean and variance 1, \[P\left(\frac{X_1+\cdots+X_n}{\sqrt n} \le x\right) \to_{n\to+\infty} \int_{-\infty}^x \frac{e^{-t^2/2}}{\sqrt{2\pi}} dt.\] To understand what is universal and intuitive about the CLT, let's forget the limit for a moment. The above statement says that if $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ are two sequences of independent random variables each with zero mean and variance 1, then \[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \] for every indicator function $f$ of the form, for some fixed real $x$, \begin{equation} f(t) = \begin{cases} 1 \text{ if } t < x \\ 0 \text{ if } t\ge x.\end{cases} \end{equation} The previous display embodies the fact that the limit is the same no matter the particular distributions of $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$, provided that the random variables are independent with mean zero, variance one.
Some other versions of the CLT mention the class of Lipschitz functions that are bounded by 1; some other versions mention the class of smooth functions with bounded derivative of order $k$. Consider two sequences $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ as above and, for some function $f$, the convergence result (CONV):
\[E \left[ f\left(\tfrac{X_1+\cdots+X_n}{\sqrt n}\right) \right] - E \left[ f\left(\tfrac{Z_1+\cdots+Z_n}{\sqrt n}\right) \right] \to_{n\to+\infty} 0 \tag{CONV}\]
It is possible to establish the equivalence ("if and only if") between the following statements:
(CONV) above holds for every indicator function $f$ of the form $f(t)=1$ for $t < x$ and $f(t)=0$ for $t\ge x$, for some fixed real $x$.
(CONV) holds for every bounded Lipschitz function $f:R\to R$.
(CONV) holds for every smooth (i.e., $C^{\infty}$) function with compact support.
(CONV) holds for every function $f$ that is three times continuously differentiable with $\sup_{x\in R} |f'''(x)| \le 1$.
Each of the 4 points above says that the convergence holds for a large class of functions. By a technical approximation argument, one can show that the four points above are equivalent; we refer the reader to Chapter 7, page 77 of David Pollard's book A User's Guide to Measure Theoretic Probability, from which this answer is highly inspired.
Our assumption for the remainder of this answer...
We will assume that $\sup_{x\in R} |f'''(x)| \le C$ for some constant $C>0$, which corresponds to point 4 above. We will also assume that the random variables have finite, bounded third moment: $E[|X_i|^3]$ and $E[|Z_i|^3]$ are finite.
2. The value of $E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is universal: it does not depend on the distribution of $X_1,...,X_n$
Let us show that this quantity is universal (up to a small error term), in the sense that it does not depend on which collection of independent random variables was provided. Take $X_1,\ldots,X_n$ and $Z_1,\ldots,Z_n$ two sequences of independent random variables, each with mean 0 and variance 1, and finite third moment.
The idea is to iteratively replace $X_i$ by $Z_i$ in one of the two quantities and control the difference by basic calculus (the idea, I believe, is due to Lindeberg). By a Taylor expansion, if $W = Z_1+\cdots+Z_{n-1}$, and $h(x)=f(x/\sqrt n)$ then \begin{align} h(Z_1+\cdots+Z_{n-1}+X_n) &= h(W) + X_n h'(W) + \frac{X_n^2 h''(W)}{2} + \frac{X_n^3 h'''(M_n)}{6} \\ h(Z_1+\cdots+Z_{n-1}+Z_n) &= h(W) + Z_n h'(W) + \frac{Z_n^2 h''(W)}{2} + \frac{Z_n^3 h'''(M_n')}{6} \\ \end{align} where $M_n$ and $M_n'$ are midpoints given by the mean-value theorem. Taking expectation on both lines, the zeroth order term is the same, and the first order terms are equal in expectation because, by independence of $X_n$ and $W$, $E[X_n h'(W)]= E[X_n] E[h'(W)] =0$, and similarly for the second line. Again by independence, the second order terms are the same in expectation. The only remaining terms are the third order ones, and in expectation the difference between the two lines is at most \[ \frac{(C/6)E[ |X_n|^3 + |Z_n|^3 ]}{(\sqrt n)^3}. \] Here $C$ is an upper bound on the absolute value of the third derivative $f'''$. The denominator $(\sqrt{n})^3$ appears because $h'''(t) = f'''(t/\sqrt n)/(\sqrt n)^3$. By independence, the contribution of $X_n$ in the sum is meaningless because it could be replaced by $Z_n$ without incurring an error larger than the above display!
We now reiterate to replace $X_{n-1}$ by $Z_{n-1}$. If $\tilde W= Z_1+Z_2+\cdots+Z_{n-2} + X_n$ then \begin{align} h(Z_1+\cdots+Z_{n-2}+X_{n-1}+X_n) &= h(\tilde W) + X_{n-1} h'(\tilde W) + \frac{X_{n-1}^2 h''(\tilde W)}{2} + \frac{X_{n-1}^3 h'''(\tilde M_n)}{6}\\ h(Z_1+\cdots+Z_{n-2}+Z_{n-1}+X_n) &= h(\tilde W) + Z_{n-1} h'(\tilde W) + \frac{Z_{n-1}^2 h''(\tilde W)}{2} + \frac{Z_{n-1}^3 h'''(\tilde M_n')}{6}. \end{align} By independence of $Z_{n-1}$ and $\tilde W$, and by independence of $X_{n-1}$ and $\tilde W$, again the zeroth, first and second order terms are equal in expectation for both lines. The difference in expectation between the two lines is again at most \[ \frac{(C/6)E[ |X_{n-1}|^3 + |Z_{n-1}|^3 ]}{(\sqrt n)^3}. \] We keep iterating until all the $X_i$'s have been replaced by $Z_i$'s. By adding the errors made at each of the $n$ steps, we obtain \[ \Big| E\left[ f\left( \tfrac{X_1+\cdots+X_n}{\sqrt n} \right) \right]-E\left[ f\left( \tfrac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] \Big| \le n \frac{(C/6)\max_{i=1,\ldots,n} E[ |X_i|^3 + |Z_i|^3 ]}{(\sqrt n)^3}. \] As $n$ increases, the right hand side converges to 0 if the third moments of our random variables are finite (let's assume this is the case). This means that the expectations on the left become arbitrarily close to each other, no matter if the distribution of $X_1,\ldots,X_n$ is far from that of $Z_1,\ldots,Z_n$. By independence, the contribution of each $X_i$ in the sum is meaningless because it could be replaced by $Z_i$ without incurring an error larger than $O(1/(\sqrt n)^3)$. And replacing all $X_i$'s by the $Z_i$'s does not change the quantity by more than $O(1/\sqrt n)$.
The expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ is thus universal, it does not depend on the distribution of $X_1,\ldots,X_n$. On the other hand, independence and $E[X_i]=E[Z_i]=0,E[Z_i^2]=E[X_i^2]=1$ was of utmost importance for the above bounds.
3. Why the normal distribution?
We have seen that the expectation $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$ will be the same no matter what the distribution of $X_i$ is, up to a small error of order $O(1/\sqrt n)$.
But for applications, it would be useful to compute such quantity. It would also be useful to get a simpler expression for this quantity $E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right]$.
Since this quantity is the same for any collection $X_1,\ldots,X_n$, we can simply pick one specific collection such that the distribution $(X_1+\cdots+X_n)/\sqrt n$ is easy to compute or easy to remember.
For the normal distribution $N(0,1)$, it happens that this quantity becomes really simple. Indeed, if $Z_1,\ldots,Z_n$ are iid $N(0,1)$ then $\frac{Z_1+\cdots+Z_n}{\sqrt n}$ also has the $N(0,1)$ distribution, and it does not depend on $n$! Hence if $Z\sim N(0,1)$, then \[ E\left[ f\left( \frac{Z_1+\cdots+Z_n}{\sqrt n} \right) \right] = E[ f(Z)], \] and by the above argument, for any collection of independent random variables $X_1,\ldots,X_n$ with $E[X_i]=0,E[X_i^2]=1$,
\[ \left| E\left[ f\left( \frac{X_1+\cdots+X_n}{\sqrt n} \right) \right] -E[f(Z)] \right| \le \frac{\sup_{x\in R} |f'''(x)| \max_{i=1,\ldots,n} E[|X_i|^3 + |Z|^3]}{6\sqrt n}. \]
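A rough Monte Carlo check of this universality is easy to run; the smooth test function, the two non-normal distributions, and the sample sizes below are all arbitrary illustration choices.

```python
import numpy as np

rng = np.random.default_rng(2)

def f(t):
    return np.tanh(t - 0.5)        # a smooth test function with bounded third derivative

def estimate(sampler, n, reps=400_000):
    x = sampler((reps, n))
    return f(x.sum(axis=1) / np.sqrt(n)).mean()

# Three mean-zero, variance-one distributions with very different shapes.
uniform = lambda size: rng.uniform(-np.sqrt(3.0), np.sqrt(3.0), size)
signs = lambda size: rng.choice([-1.0, 1.0], size)          # Rademacher
normal = lambda size: rng.standard_normal(size)

for n in [1, 2, 4, 16]:
    e_u, e_s, e_z = (estimate(s, n) for s in (uniform, signs, normal))
    print(f"n={n:3d}  uniform {e_u:+.4f}  signs {e_s:+.4f}  normal {e_z:+.4f}")
```

The three columns disagree noticeably for tiny $n$ and become indistinguishable (up to Monte Carlo noise) as $n$ grows, which is the content of the bound above.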
jlewk
$\begingroup$ You seem to be asserting a law of large numbers rather than the CLT. $\endgroup$
$\begingroup$ I am not sure why you would say this, @whuber. The above give an intuitive proof that $E[f((X_1+...+X_n)/\sqrt n)]$ converges to $E[f(Z)]$ where $Z\sim N(0,1)$ for a large class of functions $f$. This is the CLT. $\endgroup$
– jlewk
$\begingroup$ I see what you mean. What gives me pause is that your assertion concerns only expectations and not distributions, whereas the CLT draws conclusions about a limiting distribution. The equivalence between the two might not immediately be evident to many. Might I suggest, then, that you provide an explicit connection between your statement and the usual statements of the CLT in terms of limiting distributions? (+1 by the way: thank you for elaborating this argument.) $\endgroup$
$\begingroup$ Really great answer. I find this much more intuitive than characteristic function kung-fu. $\endgroup$
– Eric Auld
Why the $\sqrt{n}$ instead of $n$? What's this weird version of an average?
If you have a bunch of perpendicular vectors $x_1, \dotsc, x_n$ of length $\ell$, then $ \frac{x_1 + \dotsb + x_n}{\sqrt{n}}$ is again of length $\ell.$ You have to normalize by $\sqrt{n}$ to keep the sum at the same scale.
There is a deep connection between independent random variables and orthogonal vectors. When random variables are independent, that basically means that they are orthogonal vectors in a vector space of functions.
(The function space I refer to is $L^2$, and the variance of a random variable $X$ is just $\|X - \mu\|_{L^2}^2$. So no wonder the variance is additive over independent random variables. Just like $\|x + y\|^2 = \|x\|^2 + \|y\|^2$ when $x \perp y$.)**
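Here is a tiny numeric sanity check of both halves of that remark (the vectors and distributions used are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(3)

# Geometric half: n perpendicular vectors of equal length L.
n, L = 50, 2.0
vectors = L * np.eye(n)                     # trivially perpendicular, each of length L
combo = vectors.sum(axis=0) / np.sqrt(n)
print(np.linalg.norm(combo))                # back to length L

# Probabilistic half: variances of independent variables add, like squared norms.
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = rng.exponential(1.0, size=1_000_000) - 1.0
print(np.var(x) + np.var(y), np.var(x + y)) # approximately equal
```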
One thing that really confused me for a while, and which I think lies at the heart of the matter, is the following question:
Why is it that the sum $\frac{X_1 + \dotsb + X_n} {\sqrt{n}}$ ($n$ large) doesn't care anything about the $X_i$ except their mean and their variance? (Moments 1 and 2.)
This is similar to the law of large numbers phenomenon:
$\frac{X_1 + \dotsb + X_n} {n}$ ($n$ large) only cares about moment 1 (the mean).
(Both of these have their hypotheses that I'm suppressing (see the footnote), but the most important thing, of course, is that the $X_i$ be independent.)
A more elucidating way to express this phenomenon is: in the sum $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$, I can replace any or all of the $X_i$ with some other RV's, mixing and matching between all kinds of various distributions, as long as they have the same first and second moments. And it won't matter as long as $n$ is large, relative to the moments.
If we understand why that's true, then we understand the central limit theorem. Because then we may as well take $X_i$ to be normal with the same first and second moment, and in that case we know $\frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ is just normal again for any $n$, including super-large $n$. Because the normal distribution has the special property ("stability") that you can add two independent normals together and get another normal. Voila.
The explanation of the first-and-second-moment phenomenon is ultimately just some arithmetic. There are several lenses through which one can choose to view this arithmetic. The most common one people use is the Fourier transform (AKA characteristic function), which has the feel of "I follow the steps, but how and why would anyone ever think of that?" Another approach is to look at the cumulants of $X_i$. There we find that the normal distribution is the unique distribution whose higher cumulants vanish, and dividing by $\sqrt{n}$ tends to kill all but the first two cumulants as $n$ gets large.
I'll show here a more elementary approach. As the sum $Z_n \overset{\text{(def)}}{=} \frac{X_1 + \dotsb + X_n}{\sqrt{n}}$ gets longer and longer, I'll show that all of the moments of $Z_n$ are functions only of the variances $\operatorname{Var}(X_i)$ and the means $\mathbb{E}X_i$, and nothing else. Now the moments of $Z_n$ determine the distribution of $Z_n$ (that's true not just for long independent sums, but for any nice distribution, by the Carleman continuity theorem). To restate, we're claiming that as $n$ gets large, $Z_n$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$. And to show that, we're going to show that $\mathbb{E}((Z_n - \mathbb{E}Z_n)^k)$ depends only on the $\mathbb{E}X_i$ and the $\operatorname{Var}X_i$. That suffices, by the Carleman continuity theorem.
For convenience, let's require that the $X_i$ have mean zero and variance $\sigma^2$. Assume all their moments exist and are uniformly bounded. (But nevertheless, the $X_i$ can be all different independent distributions.)
Claim: Under the stated assumptions, the $k$th moment $$\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$$ has a limit as $n \to \infty$, and that limit is a function only of $\sigma^2$. (It disregards all other information.)
(Specifically, the values of those limits of moments are just the moments of the normal distribution $\mathcal{N}(0, \sigma^2)$: zero for $k$ odd, and $|\sigma|^k \frac{k!}{(k/2)!2^{k/2}}$ when $k$ is even. This is equation (1) below.)
Proof: Consider $\mathbb{E} \left[ \left(\frac{X_1 + \dotsb + X_n}{\sqrt{n}}\right)^k \right]$. When you expand it, you get a factor of $n^{-k/2}$ times a big fat multinomial sum.
$$n^{-k/2} \sum_{\substack{|\boldsymbol{\alpha}| = k \\ \alpha_i \geq 0}} \binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i}), \qquad |\boldsymbol{\alpha}| = \alpha_1 + \dotsb + \alpha_n.$$
(Remember you can distribute the expectation over independent random variables. $\mathbb{E}(X^a Y^b) = \mathbb{E}(X^a)\mathbb{E}(Y^b)$.)
Now if ever I have as one of my factors a plain old $\mathbb{E}(X_i)$, with exponent $\alpha_i =1$, then that whole term is zero, because $\mathbb{E}(X_i) = 0$ by assumption. So I need all the exponents $\alpha_i \neq 1$ in order for that term to survive. That pushes me toward using fewer of the $X_i$ in each term, because each term has $\sum \alpha_i = k$, and I have to have each $\alpha_i >1$ if it is $>0$. In fact, some simple arithmetic shows that at most $k/2$ of the $\alpha_i$ can be nonzero, and that's only when $k$ is even, and when I use only twos and zeros as my $\alpha_i$.
This pattern where I use only twos and zeros turns out to be very important...in fact, any term where I don't do that will vanish as the sum grows larger.
Lemma: The sum $$n^{-k/2} \sum_{|\boldsymbol{\alpha}| = k}\binom{k}{\alpha_1, \dotsc, \alpha_n}\prod_{i=1}^n \mathbb{E}(X_i^{\alpha_i})$$ breaks up like $$n^{-k/2} \left( \underbrace{\left( \text{terms where some } \alpha_i = 1 \right)}_{\text{These are zero because $\mathbb{E}X_i = 0$}} + \underbrace{\left( \text{terms where }\alpha_i\text{'s are twos and zeros}\right)}_{\text{This part is } O(n^{k/2}) \text{ if $k$ is even, otherwise no such terms}} + \underbrace{\left( \text{rest of terms}\right)}_{o(n^{k/2})} \right)$$
In other words, in the limit, all terms become irrelevant except
$$ n^{-k/2}\sum\limits_{\binom{n}{k/2}} \underbrace{\binom{k}{2,\dotsc, 2}}_{k/2 \text{ twos}} \prod\limits_{j=1}^{k/2}\mathbb{E}(X_{i_j}^2) \tag{1}$$
Proof: The main points are to split up the sum by which (strong) composition of $k$ is represented by the multinomial $\boldsymbol{\alpha}$. There are only $2^{k-1}$ possibilities for strong compositions of $k$, so the number of those can't explode as $n \to \infty$. Then there is the choice of which of the $X_1, \dotsc, X_n$ will receive the positive exponents, and the number of such choices is $\binom{n}{\text{# positive terms in }\boldsymbol{\alpha}} = O(n^{\text{# positive terms in }\boldsymbol{\alpha}})$. (Remember the number of positive terms in $\boldsymbol{\alpha}$ can't be bigger than $k/2$ without killing the term.) That's basically it. You can find a more thorough description here on my website, or in section 2.2.3 of Tao's Topics in Random Matrix Theory, where I first read this argument.
And that concludes the whole proof. We've shown that all moments of $\frac{X_1 + … + X_n}{\sqrt{n}}$ forget everything but $\mathbb{E}X_i$ and $\mathbb{E}(X_i^2)$ as $n \to \infty$. And therefore swapping out the $X_i$ with any variables with the same first and second moments wouldn't have made any difference in the limit. And so we may as well have taken them to be $\sim \mathcal{N}(\mu, \sigma^2)$ to begin with; it wouldn't have made any difference.
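The claim is easy to probe empirically, which may help if the combinatorics feel abstract; the shifted-exponential choice below is arbitrary, it just needs mean 0 and variance 1.

```python
import numpy as np
from math import factorial

rng = np.random.default_rng(4)

def normal_moment(k, sigma=1.0):
    # k-th moment of N(0, sigma^2): 0 for odd k, sigma^k * k! / ((k/2)! 2^(k/2)) for even k.
    return 0.0 if k % 2 else sigma**k * factorial(k) / (factorial(k // 2) * 2 ** (k // 2))

def standardized_sum(n, reps=100_000):
    x = rng.exponential(1.0, size=(reps, n)) - 1.0   # mean 0, variance 1, very skewed
    return x.sum(axis=1) / np.sqrt(n)

print("targets:", [normal_moment(k) for k in (3, 4, 6)])
for n in [1, 10, 100]:
    z = standardized_sum(n)
    print(n, [round(float(np.mean(z**k)), 2) for k in (3, 4, 6)])
```

The sixth-moment column is noisy for small $n$ because the raw distribution has heavy higher moments, but the drift of all the columns toward the Gaussian targets as $n$ grows is clear.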
**(If one wants to pursue more deeply the question of why $n^{1/2}$ is the magic number here for vectors and for functions, and why the variance (square $L^2$ norm) is the important statistic, one might read about why $L^2$ is the only $L^p$ space that can be an inner product space. Because $2$ is the only number that is its own Holder conjugate.)
Another valid view is that $n^{1/2}$ is not the only denominator that can appear. There are different "basins of attraction" for random variables, and so there are infinitely many central limit theorems. There are random variables for which $\frac{X_1 + \dotsb + X_n}{n} \Rightarrow X$, and for which $\frac{X_1 + \dotsb + X_n}{1} \Rightarrow X$! But these random variables necessarily have infinite variance. These are called "stable laws".
It's also enlightening to look at the normal distribution from a calculus of variations standpoint: the normal distribution $\mathcal{N}(\mu, \sigma^2)$ maximizes the Shannon entropy among distributions with a given mean and variance, and which are absolutely continuous with respect to the Lebesgue measure on $\mathbb{R}$ (or $\mathbb{R}^d$, for the multivariate case). This is proven here, for example.
Eric Auld
$\begingroup$ +1 I like this for its insight and elementary nature. In spirit it looks the same as the answer by jlewk. $\endgroup$
$\begingroup$ This is lovely. Thank you. Can you elaborate on why the sum of "rest of terms" is $o(n^{k/2})$? I didn't quite follow that step. $\endgroup$
– D.W.
I gave up on trying to come up with an intuitive version and came up with some simulations. I have one that presents a simulation of a Quincunx and some others that do things like show how, even when the raw reaction time distribution is skewed, the distribution of mean RTs becomes normal if you collect enough RTs per subject. I think they help but they're new in my class this year and I haven't graded the first test yet.
One thing that I thought was good was being able to show the law of large numbers as well. I could show how variable things are with small sample sizes and then show how they stabilize with large ones. I do a bunch of other large number demos as well. I can show the interaction in the Quincunx between the numbers of random processes and the numbers of samples.
(turns out not being able to use a chalk or white board in my class may have been a blessing)
John
$\begingroup$ Hi John: nice to see you back with this post after almost nine years! It would be interesting to read about the experiences you have had in the meantime with your use of simulations to teach the idea of the CLT and the LLNs. $\endgroup$
$\begingroup$ I stopped teaching that class a year later but the subsequent instructor picked up on the simulation idea. In fact, he carries it much farther and has developed a sequence of Shiny apps and has students play with simulations for loads of things in the 250 person class. As near as I can tell from teaching the upper class the students seem to get a lot out of it. The difference between his students and those from equivalent feeder classes is noticeable. (But, of course, there are lots of uncontrolled variables there.) $\endgroup$
Jul 2, 2019 at 9:59
$\begingroup$ Thank you, John. It is so unusual to get even anecdotal feedback about lasting student performance after a class has finished that I find even this limited information of interest. $\endgroup$
Jul 2, 2019 at 13:34
What follows is perhaps the most intuitive explanation I have come across for the CLT.
Consider a standard six-sided die. Every time you roll that die, an integer value results between 1 and 6, with equal probability. So, if you were to roll that die many, many times and then plot the frequency with which the different values occur, you will see a flat line; all six values arise with equal frequency.
Now, what happens when you roll a pair of dice and add them together? If you roll the pair of dice, integer values from 2 through 12 will result. If you were to roll the pair of dice many, many times and record their sum, what will the resulting distribution look like? You will not find a flat distribution; you will find that the distribution is peaked in the middle. Why? While only one combination of values yields a 2 (1 and 1), and only one combination yields a 12 (6 and 6), multiple combinations can yield a 7 (5 and 2, 2 and 5, 3 and 4, or 4 and 3). Note: if you have ever played Settlers of Catan, this may be familiar to you! This is why the 6 and 8 tiles are more desirable than the 2 or 12 tiles; the 6s and 8s occur more often.
This concept only amplifies as you add more dice to the summation. That is, as you increase the number of random variables that enter your sum, the distribution of resulting values across trials will grow increasingly peaked in the middle. And, this property is not tied to the uniform distribution of a die; the same result will occur if you sum random variables drawn from any underlying distribution.
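The amplification is easy to see by convolving the die distribution with itself (any other starting distribution could be dropped in instead):

```python
import numpy as np

die = np.ones(6) / 6.0            # fair six-sided die, faces 1..6

dist = np.array([1.0])            # distribution of an empty sum
for n_dice in range(1, 11):
    dist = np.convolve(dist, die) # add one more die to the running sum
    if n_dice in (1, 2, 5, 10):
        # index 0 corresponds to the minimum possible sum (all ones)
        print(f"{n_dice:2d} dice: P(most likely sum) = {dist.max():.4f}, "
              f"P(all ones) = {dist[0]:.2e}")
```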
Gord B
$\begingroup$ This comes down to a series of assertions beginning with "as you increase the number of random variables that enter your sum, the distribution of resulting values across trials will grow increasingly peaked in the middle." How do you demonstrate that? How do you show there aren't multiple peaks when the original distribution is not uniform? What can you demonstrate intuitively about how the spread of the distribution grows? Why does the same limiting distribution appear in the limit, no matter what distribution you start with? $\endgroup$
$\begingroup$ @whuber My goal here was intuition, as OP requested. The logic can be evaluated numerically. If a particular value arises with probability 1/6 in a single roll, then the probability of getting that same value twice will be 1/6*1/6, etc. As there are relatively fewer combinations of values that yield sums in the tails, the tails will arise with decreasing probability as dice are added to the set. The same logic holds with a loaded die, i.e., any distribution (you can see this numerically in a simulation): github.com/gburtch/simple_CLT_sim/blob/main/r_simulation.R. $\endgroup$
– Gord B
$\begingroup$ Here is a Python gist producing a similar plot. $\endgroup$
– Galen
Sample records for newton famously penned
Newton and scholastic philosophy.
Levitin, Dmitri
This article examines Isaac Newton's engagement with scholastic natural philosophy. In doing so, it makes two major historiographical interventions. First of all, the recent claim that Newton's use of the concepts of analysis and synthesis was derived from the Aristotelian regressus tradition is challenged on the basis of bibliographical, palaeographical and intellectual evidence. Consequently, a new, contextual explanation is offered for Newton's use of these concepts. Second, it will be shown that some of Newton's most famous pronouncements - from the General Scholium appended to the second edition of the Principia (1713) and from elsewhere - are simply incomprehensible without an understanding of specific scholastic terminology and its later reception, and that this impacts in quite significant ways on how we understand Newton's natural philosophy more generally. Contrary to the recent historiographical near-consensus, Newton did not hold an elaborate metaphysics, and his seemingly 'metaphysical' statements were in fact anti-scholastic polemical salvoes. The whole investigation will permit us a brief reconsideration of the relationship between the self-proclaimed 'new' natural philosophy and its scholastic predecessors.
Newton's Path to Universal Gravitation: The Role of the Pendulum
Boulos, Pierre J.
Much attention has been given to Newton's argument for Universal Gravitation in Book III of the "Principia". Newton brings an impressive array of phenomena, along with the three laws of motion, and his rules for reasoning to deduce Universal Gravitation. At the centre of this argument is the famous "moon test". Here it is the empirical evidence…
Isaac Newton: Eighteenth-century Perspectives
Hall, A. Rupert
This new product of the ever-flourishing Newton industry seems a bit far-fetched at first sight: who but a few specialists would be interested in the historiography of Newton biography in the eighteenth century? On closer inspection, this book by one of the most important Newton scholars of our day turns out to be of interest to a wider audience as well. It contains several biographical sketches of Newton, written in the decades after his death. The two most important ones are the Eloge by the French mathematician Bernard de Fontenelle and the Italian scholar Paolo Frisi's Elogio. The latter piece was hitherto unavailable in English translation. Both articles are well-written, interesting and sometimes even entertaining. They give us new insights into the way Newton was revered throughout Europe and how not even the slightest blemish on his personality or work could be tolerated. An example is the way in which Newton's famous controversy with Leibniz is treated: Newton is without hesitation presented as the wronged party. Hall has provided very useful historical introductions to the memoirs as well as footnotes where needed. Among the other articles discussed is a well-known memoir by John Conduitt, who was married to Newton's niece. This memoir, substantial parts of which are included in this volume, has been a major source of personal information for Newton biographers up to this day. In a concluding chapter, Hall gives a very interesting overview of the later history of Newton biography, in which he describes the gradual change from adoration to a more critical approach in Newton's various biographers. In short, this is a very useful addition to the existing biographical literature on Newton. A J Kox
Catch a falling apple: Isaac Newton and myths of genius.
Fara, P
Newton has become a legendary figure belonging to the distant past rather than a historical person who lived at a specific time. Historians and scientists have constantly reinterpreted many anecdotal tales describing Newton's achievements and behaviour, but the most famous concerns the falling apple in his country garden. Newton's apple conjures up multiple allegorical resonances, and examining its historical accuracy is less important than uncovering the mythical truths embedded within this symbol. Because interest groups fashion different collective versions of the past, analysing mythical tales can reveal fundamental yet conflicting attitudes towards science and its practices.
Pen- Name in Persian and Arabic Poetry
Ebrahim Khodayar
Full Text Available Abstract The pen-name (Takhalloss) is one of the main features of Persian poetry. It has been a matter of concern to poets across the Persian-language cultural sphere in the East at least up to the Mashrouteh era. The pen-name spread to other Muslim nations through Persian poetry. Although it is not as prominent in the Arab nations as among Persian speakers, it is known there as "Alqab-o-shoara" and has, by this route, enriched the work of Arabic poets. The present paper, using a descriptive-analytic approach, compares the pen-names of Persian and Arabic poets and investigates their features in both cultures. The main research question is: what are the similarities and differences between the pen-names of Persian and Arabic poets in terms of the type of name, its position, and its importance? The results show that the pen-name, through its remarkable expansion in Persian poetry, has also influenced Arabic poetry. Besides the factors usually at work in the choice of a pen-name (such as a pseudonym, sobriquet, or nickname), external factors such as events, patrons, community benefactors, and climate, as well as internal factors including the poets' inner beliefs, play a part.
Newton shows the light: a commentary on Newton (1672) 'A letter … containing his new theory about light and colours…'.
Fara, Patricia
Isaac Newton's reputation was initially established by his 1672 paper on the refraction of light through a prism; this is now seen as a ground-breaking account and the foundation of modern optics. In it, he claimed to refute Cartesian ideas of light modification by definitively demonstrating that the refrangibility of a ray is linked to its colour, hence arguing that colour is an intrinsic property of light and does not arise from passing through a medium. Newton's later significance as a world-famous scientific genius and the apparent confirmation of his experimental results have tended to obscure the realities of his reception at the time. This paper explores the rhetorical strategies Newton deployed to convince his audience that his conclusions were certain and unchallengeable. This commentary was written to celebrate the 350th anniversary of the journal Philosophical Transactions of the Royal Society.
Space and motion in nature and Scripture: Galileo, Descartes, Newton.
Janiak, Andrew
In the Scholium to the Definitions in Principia mathematica, Newton departs from his main task of discussing space, time and motion by suddenly mentioning the proper method for interpreting Scripture. This is surprising, and it has long been ignored by scholars. In this paper, I argue that the Scripture passage in the Scholium is actually far from incidental: it reflects Newton's substantive concern, one evident in correspondence and manuscripts from the 1680s, that any general understanding of space, time and motion must enable readers to recognize the veracity of Biblical claims about natural phenomena, including the motion of the earth. This substantive concern sheds new light on an aspect of Newton's project in the Scholium. It also underscores Newton's originality in dealing with the famous problem of reconciling theological and philosophical conceptions of nature in the seventeenth century. Copyright © 2015 Elsevier Ltd. All rights reserved.
Bohlin transformation: the hidden symmetry that connects Hooke to Newton
Saggio, Maria Luisa
Hooke's name is familiar to students of mechanics thanks to the law of force that bears his name. Less well-known is the influence his findings had on the founder of mechanics, Isaac Newton. In a lecture given some twenty years ago, V I Arnol'd pointed out the outstanding contribution to science made by Hooke, and also noted the controversial issue of the attribution of important discoveries to Newton that were actually inspired by Hooke. It therefore seems ironic that the two most famous force laws, named after Hooke and Newton, are two geometrical aspects of the same law. This relationship, together with other illuminating aspects of Newtonian mechanics, is described in Arnol'd's book and is worth remembering in standard physics courses. In this didactical paper the duality of the two forces is expounded and an account of the more recent contributions to the subject is given. (paper)
Judaism in the theology of Sir Isaac Newton
Goldish, Matt
This book is based on my doctoral dissertation from the Hebrew University of Jerusalem (1996) of the same title. As a master's student, working on an entirely different project, I was well aware that many of Newton's theological manuscripts were located in our own Jewish National and University Library, but I was under the mistaken assumption that scores of highly qualified scholars must be assiduously scouring them and publishing their results. It never occurred to me to look at them at all until, having finished my master's, I spoke to Professor David Katz at Tel-Aviv University about an idea I had for doctoral research. Professor Katz informed me that the project I had suggested was one which he himself had just finished, but that I might be interested in working on the famous Newton manuscripts in the context of a project being organized by him, Richard Popkin, James Force, and the late Betty Jo Teeter Dobbs, to study and publish Newton's theological material. I asked him whether he was not sending me into ...
Conference | From Newton to Hawking and beyond | 28 May
CERN Multimedia
From Newton to Hawking and beyond: Why disability equality is relevant to the world of particle physics, Dr Tom Shakespeare. Tuesday, 28 May 2013 - 11.30 am - 1 pm Main Auditorium – Room 500-1-001 Conference organised by the CERN Diversity Programme English with French interpretation According to the recent world report on disability, 15% of the world's population is disabled. Among that group could be numbered famous physicists such as Isaac Newton and Paul Dirac, neither of whom could be classed as "neuro-typical", and Stephen Hawking. This presentation will provide some basic data about global disability, and the socially imposed barriers which disabled people face. It will also include some stories about high achieving people with disabilities. Finally, some practical suggestions will be offered on how to respect and include people with disabilities in the workplace. Tom Shakespeare is a social sci...
Programming for the Newton software development with NewtonScript
McKeehan, Julie
Programming for the Newton: Software Development with NewtonScript focuses on the processes, approaches, operations, and principles involved in software development with NewtonScript.The publication first elaborates on Newton application design, views on the Newton, and protos. Discussions focus on system protos, creating and using user protos, linking and naming templates, creating the views of WaiterHelper, Newton application designs, and life cycle of an application. The text then elaborates on the fundamentals of NewtonScript, inheritance in NewtonScript, and view system and messages. Topi
Is mind-mindedness trait-like or a quality of close relationships? Evidence from descriptions of significant others, famous people, and works of art.
Meins, Elizabeth; Fernyhough, Charles; Harris-Waller, Jayne
The four studies reported here sought to explore the nature of the construct of mind-mindedness. In Study 1, involving 37 mothers of 5- to 8-year-old children, mothers' verbal mind-minded descriptions of their children were positively correlated with their mind-minded descriptions of their current romantic partner. Participants in Studies 2 (N=114), 3 (N=173), and 4 (N=153) were young adults who provided written descriptions of: a close friend and their current romantic partner (Study 2); two specified famous people, two works of art, and a close friend (Study 3); a specified famous person, a famous person of the participant's choice, and a close friend (Study 4). Study 2 obtained paper-and-pen written descriptions, whereas participants completed descriptions in electronic format in Studies 3 and 4. Mind-minded descriptions of friends and partners were positively correlated, but there was no relation between mind-minded descriptions of a friend and the tendency to describe famous people or works of art in mind-minded terms. Levels of mind-mindedness were higher in descriptions of friends compared with descriptions of famous people or works of art. Administration format was unrelated to individuals' mind-mindedness scores. The results suggest that mind-mindedness is a facet of personal relationships rather than a trait-like quality. Copyright © 2013 Elsevier B.V. All rights reserved.
Alquimia: Isaac Newton revisitado Alchemy: Isaac Newton Revisited
Reginaldo Carmello Corrêa de Moraes
Full Text Available A note on recent publications that reveal a little-known side of Newton's library: the numerous religious, mystical and hermetic texts. Newton's biographers long resisted admitting that these esoteric writings were of genuine interest to him, or that they were important for understanding his intellectual trajectory. The studies mentioned here argue the opposite, following the path opened by J. M. Keynes's pioneering essay of 1946.
Mechanics and Newton-Cartan-like gravity on the Newton-Hooke space-time
Tian Yu; Guo Hanying; Huang Chaoguang; Xu Zhan; Zhou Bin
We focus on the dynamical aspects of the Newton-Hooke space-time $NH_+$, mainly from the viewpoint of geometric contraction of the de Sitter spacetime with Beltrami metric. (The term spacetime is used to denote a space with non-degenerate metric, while the term space-time is used to denote a space with degenerate metric.) We first discuss the Newton-Hooke classical mechanics, especially the continuous medium mechanics, in this framework. Then, we establish a consistent theory of gravity on the Newton-Hooke space-time as a kind of Newton-Cartan-like theory, parallel to Newton's gravity in the Galilei space-time. Finally, we give the Newton-Hooke invariant Schroedinger equation from the geometric contraction, where we can relate the conservative probability in some sense to the mass density in the Newton-Hooke continuous medium mechanics. Similar considerations apply to the Newton-Hooke space-time $NH_-$ contracted from anti-de Sitter spacetime.
Fractal aspects and convergence of Newton's method
Drexler, M. [Oxford Univ. Computing Lab. (United Kingdom)
Newton's Method is a widely established iterative algorithm for solving non-linear systems. Its appeal lies in its great simplicity, easy generalization to multiple dimensions and a quadratic local convergence rate. Despite these features, little is known about its global behavior. In this paper, we will explain a seemingly random global convergence pattern using fractal concepts and show that the behavior of the residual is entirely explicable. We will also establish quantitative results for the convergence rates. Knowing the mechanism of fractal generation, we present a stabilization to the orthodox Newton method that remedies the fractal behavior and improves convergence.
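The "seemingly random global convergence pattern" is easy to visualize for a toy system; the sketch below is a generic illustration, not taken from the paper. It colours each complex starting point by which cube root of unity Newton's method for $f(z)=z^3-1$ converges to; the ragged basin boundaries are the fractal structure in question.

```python
import numpy as np

# Newton iteration for f(z) = z^3 - 1 applied to a grid of complex starting points.
xs = np.linspace(-1.5, 1.5, 400)
z = xs[None, :] + 1j * xs[:, None]

for _ in range(100):
    z = z - (z**3 - 1) / (3 * z**2)

roots = np.exp(2j * np.pi * np.arange(3) / 3)           # the three cube roots of unity
basin = np.argmin(np.abs(z[..., None] - roots), axis=-1)

# Share of the plane attracted to each root, and how ragged the basin boundaries are.
print("basin shares:", np.bincount(basin.ravel()) / basin.size)
print("fraction of vertically adjacent pixels that disagree:",
      (np.diff(basin, axis=0) != 0).mean())
```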
A combined modification of Newton's method for systems of nonlinear equations
Monteiro, M.T.; Fernandes, E.M.G.P. [Universidade do Minho, Braga (Portugal)
To improve the performance of Newton's method for the solution of systems of nonlinear equations a modification to the Newton iteration is implemented. The modified step is taken as a linear combination of Newton step and steepest descent directions. In the paper we describe how the coefficients of the combination can be generated to make effective use of the two component steps. Numerical results that show the usefulness of the combined modification are presented.
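The abstract does not spell out the exact update rule, so the following is only a generic sketch of the idea it describes: mix the Newton direction with the steepest-descent direction of the merit function $\frac12\|F(x)\|^2$, here with a fixed mixing weight and a simple backtracking line search (the test system, starting point, and weight are arbitrary choices).

```python
import numpy as np

def F(x):                       # example system: a circle intersected with an exponential curve
    return np.array([x[0]**2 + x[1]**2 - 4.0,
                     np.exp(x[0]) + x[1] - 1.0])

def J(x):                       # its Jacobian
    return np.array([[2.0 * x[0], 2.0 * x[1]],
                     [np.exp(x[0]), 1.0]])

def merit(x):
    return 0.5 * float(F(x) @ F(x))

def combined_direction(x, theta):
    """Convex combination of the Newton and steepest-descent directions (a generic sketch)."""
    Fx, Jx = F(x), J(x)
    d_newton = np.linalg.solve(Jx, -Fx)     # Newton direction
    d_sd = -Jx.T @ Fx                       # steepest descent for 0.5 * ||F||^2
    return theta * d_newton + (1.0 - theta) * d_sd

x = np.array([1.0, 1.0])
for _ in range(50):
    d = combined_direction(x, theta=0.8)
    t = 1.0
    while merit(x + t * d) >= merit(x) and t > 1e-10:   # simple backtracking line search
        t *= 0.5
    x = x + t * d

print(x, np.linalg.norm(F(x)))
```

Both component directions are descent directions for the merit function, so any convex combination of them is too; how best to choose the weights is exactly the question the paper addresses.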
Newton's gift how Sir Isaac Newton unlocked the system of the world
Berlinski, David
Sir Isaac Newton, creator of the first and perhaps most important scientific theory, is a giant of the scientific era. Despite this, he has remained inaccessible to most modern readers, indisputably great but undeniably remote. In this witty, engaging, and often moving examination of Newton's life, David Berlinski recovers the man behind the mathematical breakthroughs. The story carries the reader from Newton's unremarkable childhood to his awkward undergraduate days at Cambridge through the astonishing year in which, working alone, he laid the foundation for his system of the world, his Principia Mathematica, and to the subsequent monumental feuds that poisoned his soul and wearied his supporters. An edifying appreciation of Newton's greatest accomplishment, Newton's Gift is also a touching celebration of a transcendent man.
Newton's apple Isaac Newton and the English scientific renaissance
Aughton, Peter
In the aftermath of the English Civil War, the Restoration overturned England's medieval outlook and a new way of looking at the world allowed the genius of Isaac Newton (b. 1642) and his contemporaries to flourish. Newton had a long and eventful life apart from his scientific discoveries. He was born at the beginning of the Civil War, and his studies were disrupted by the twin disasters of the Great Plague and the Fire of London; a brilliant and enigmatic genius, Newton dabbled in alchemy, wrote over a million words on the Bible, quarrelled with his contemporaries and spent his last years as Master of the Royal Mint as well as President of the Royal Society. This book sets Newton's life and work against this dramatic intellectual rebirth; among his friends and contemporaries were Samuel Pepys, the colourful diarist, John Evelyn, the eccentric antiquarian, the astronomers Edmund Halley and John Flamsteed, and Christopher Wren, the greatest architect of his age. They were all instrumental in the founding of the Ro...
Lojasiewicz exponents and Newton polyhedra
Pham Tien Son
In this paper we obtain the exact value of the Lojasiewicz exponent at the origin of analytic map germs on $K^n$ ($K = \mathbb{R}$ or $\mathbb{C}$) under the Newton non-degeneracy condition, using information from their Newton polyhedra. We also give some conclusions on Newton non-degenerate analytic map germs. As a consequence, we obtain a link between Newton non-degenerate ideals and their integral closures, thus leading to a simple proof of a result of Saia. Similar results are also considered for polynomial maps which are Newton non-degenerate at infinity. (author)
Famous face recognition, face matching, and extraversion.
Lander, Karen; Poyarekar, Siddhi
It has been previously established that extraverts who are skilled at interpersonal interaction perform significantly better than introverts on a face-specific recognition memory task. In our experiment we further investigate the relationship between extraversion and face recognition, focusing on famous face recognition and face matching. Results indicate that more extraverted individuals perform significantly better on an upright famous face recognition task and show significantly larger face inversion effects. However, our results did not find an effect of extraversion on face matching or inverted famous face recognition.
Newton's Apple
Hendry, Archibald W.
Isaac Newton may have seen an apple fall, but it was Robert Hooke who had a better idea of where it would land. No one really knows whether or not Isaac Newton actually saw an apple fall in his garden. Supposedly it took place in 1666, but it was a tale he told in his old age more than 60 years later, a time when his memory was failing and his…
The Enigma of Newton
Nunan, E.
Presents a brief biography of Sir Isaac Newton, lists contemporary scientists and scientific developments and discusses Newton's optical research and conceptual position concerning the nature of light. (JR)
Sometimes "Newton's Method" Always "Cycles"
Latulippe, Joe; Switkes, Jennifer
Are there functions for which Newton's method cycles for all non-trivial initial guesses? We construct and solve a differential equation whose solution is a real-valued function that two-cycles under Newton iteration. Higher-order cycles of Newton's method iterates are explored in the complex plane using complex powers of "x." We find a class of…
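The construction in the abstract concerns functions that cycle for every non-trivial starting guess; the classic textbook example below is not that construction, but it shows the simplest possible two-cycle of Newton iterates from one particular starting value.

```python
def f(x):
    return x**3 - 2.0 * x + 2.0   # a standard example whose Newton iteration can two-cycle

def df(x):
    return 3.0 * x**2 - 2.0

x = 0.0                           # this starting guess falls straight into the cycle
for step in range(8):
    x = x - f(x) / df(x)
    print(step, x)                # the iterates alternate between 1.0 and 0.0
```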
An evaluation of prefilled insulin pens: a focus on the Next Generation FlexPen®
Estella M Davis
Estella M Davis, Emily L Sexson, Mikayla L Spangler, Pamela A Foral, Department of Pharmacy Practice, Creighton University School of Pharmacy and Health Professions, Omaha, Nebraska, USA. Abstract: Insulin pen delivery systems are preferred by patients over the traditional vial and syringe method for insulin delivery because they are simple and easy to use, improve confidence in dosing insulin, and have less interference with activities and improved discretion with use. Insulin manufacturers have made numerous improvements to their first marketed pen devices and are now introducing their next generation of devices. Design modifications to the newest generation of prefilled insulin pen devices are intended to improve the ease of use and safety and continue to positively impact adherence to insulin. This review focuses on the Next Generation FlexPen® with regard to design considerations to reduce injection force, improve accuracy and ease of use, and evaluate the preference of patient and health-care provider compared with other disposable, prefilled insulin pen devices. Keywords: diabetes, dose accuracy, injection force, patient preference, insulin pen device
The Newton papers the strange and true odyssey of Isaac Newton's manuscripts
Dry, Sarah
When Isaac Newton died at 85 without a will on March 20, 1727, he left a mass of disorganized papers-upwards of 8 million words-that presented an immediate challenge to his heirs. Most of these writings, on subjects ranging from secret alchemical formulas to impassioned rejections of the Holy Trinity to notes and calculations on his core discoveries in calculus, universal gravitation, and optics, were summarily dismissed by his heirs as "not fit to be printed." Rabidly heretical, alchemically obsessed, and possibly even mad, the Newton presented in these papers threatened to undermine not just his personal reputation but the status of science itself. As a result, the private papers of the world's greatest scientist remained hidden to all but a select few for over two hundred years. In The Newton Papers, Sarah Dry divulges the story of how this secret archive finally came to light-and the complex and contradictory man it revealed. Covering a broad swath of history, Dry explores who controlled Newton's legacy, ...
Black Hole Results from XMM-Newton
Norbert Schartel
XMM-Newton is one of the most successful science missions of the European Space Agency. Since 2003, every year about 300 articles making direct use of XMM-Newton data are published in refereed journals. All XMM-Newton calls for observing proposals are highly oversubscribed by factors of six and more. In the following, some scientific highlights of XMM-Newton observations of black holes are summarized.
Isaac Newton: Man, Myth, and Mathematics.
Rickey, V. Frederick
This article was written in part to celebrate the anniversaries of landmark mathematical works by Newton and Descartes. Its other purpose is to dispel some myths about Sir Isaac Newton and to encourage readers to read Newton's works. (PK)
From Newton to Einstein.
Ryder, L. H.
Discusses the history of scientific thought in terms of the theories of inertia and absolute space, relativity and gravitation. Describes how Sir Isaac Newton used the work of earlier scholars in his theories and how Albert Einstein used Newton's theories in his. (CW)
Newton-Cartan gravity revisited
Andringa, Roel
In this research Newton's old theory of gravity is rederived using an algebraic approach known as the gauging procedure. The resulting theory is Newton's theory in the mathematical language of Einstein's General Relativity theory, in which gravity is spacetime curvature. The gauging procedure sheds
Isaac Newton pocket giants
May, Andrew
Isaac Newton had an extraordinary idea. He believed the physical universe and everything in it could be described in exact detail using mathematical relationships. He formulated a law of gravity that explained why objects fall downwards, how the moon causes the tides, and why planets and comets orbit the sun. While Newton's work has been added to over the years, his basic approach remains at the heart of the scientific worldview. Yet Newton's own worldview had little in common with that of a modern scientist. He believed the universe was created to a precise and rational design - a design that was fully
Some Peculiarities of Newton-Hooke Space-Times
Tian, Yu
Newton-Hooke space-times are the non-relativistic limit of (anti-)de Sitter space-times. We investigate some peculiar facts about the Newton-Hooke space-times, among which the "extraordinary Newton-Hooke quantum mechanics" and the "anomalous Newton-Hooke space-times" are discussed in detail. Analysis on the Lagrangian/action formalism is performed in the discussion of the Newton-Hooke quantum mechanics, where the path integral point of view plays an important role, and the physically measurab...
The illusion of fame: how the nonfamous become famous.
Landau, Joshua D; Leed, Stacey A
This article reports 2 experiments in which nonfamous faces were paired with famous (e.g., Oprah Winfrey) or semifamous (e.g., Annika Sorenstam) faces during an initial orienting task. In Experiment 1, the orienting task directed participants to consider the relationship between the paired faces. In Experiment 2, participants considered distinctive qualities of the paired faces. Participants then judged the fame level of old and new nonfamous faces, semifamous faces, and famous faces. Pairing a nonfamous face with a famous face resulted in a higher fame rating than pairing a nonfamous face with a semifamous face. The fame attached to the famous people was misattributed to their nonfamous partners. We discuss this pattern of results in the context of current theoretical explanations of familiarity misattributions.
NEWTON'S SECOND LAW OF MOTION, F=MA; EULER'S OR NEWTON'S?
Ajay Sharma
Objective: F = ma is taught as Newton's second law of motion all over the world. But it was given by Euler in 1775, forty-eight years after the death of Newton. It is debated here with scientific logic. Methods/Statistical analysis: The discussion partially deals with the history of science, so various aspects are quoted from original references. Newton did not give any equation in the Principia for the second or third law of motion or for the law of gravitation. Conceptually, in Newton's time, neither accele...
Vegetation survey of PEN Branch wetlands
A survey was conducted of vegetation along Pen Branch Creek at Savannah River Site (SRS) in support of K-Reactor restart. Plants were identified to species by overstory, understory, shrub, and groundcover strata. Abundance was also characterized and richness and diversity calculated. Based on woody species basal area, the Pen Branch delta was the most impacted, followed by the sections between the reactor and the delta. Species richness for shrub and groundcover strata were also lowest in the delta. No endangered plant species were found. Three upland pine areas were also sampled. In support of K Reactor restart, this report summarizes a study of the wetland vegetation along Pen Branch. Reactor effluent enters Indian Grove Branch and then flows into Pen Branch and the Pen Branch Delta.
Truncated Newton-Raphson Methods for Quasicontinuum Simulations
Liang, Yu; Kanapady, Ramdev; Chung, Peter W
In this research, we report the effectiveness of the truncated Newton-Raphson method and quasi-Newton method with low-rank Hessian update strategy that are evaluated against the full Newton-Raphson...
Accounts of famous North American Wolves, Canis lupus
Gipson, P.S.; Ballard, W.B.
We examined historical accounts of 59 famous North American Gray Wolves (Canis lupus) reported during the late nineteenth and early twentieth centuries. Fifty of the 59 wolves were purportedly responsible for great losses to livestock, but for 29 reports, evidence suggested that ≥2 wolves (e.g., packs) were responsible for the purported kills; in addition, seven wolves had traits that suggested they were hybrids with dogs, and one wolf was probably not from the area where the damage purportedly occurred. Reported livestock losses, especially to Longhorn cattle, from individual wolves appeared excessively high in relation to current literature. Most famous wolves were old and/or impaired from past injuries: 19 were reportedly ≥10 years old, 18 had mutilated feet from past trap injuries, and one had a partially severed trachea from being in a snare. Old age and physical impairments probably contributed to livestock depredations by some famous wolves. Several accounts appeared exaggerated, inaccurate, or fabricated. Historical accounts of famous wolves should be interpreted with great caution, especially when considering impacts of wolf reintroductions or when modeling predation rates.
Turning around Newton's Second Law
Goff, John Eric
Conceptual and quantitative difficulties surrounding Newton's second law often arise among introductory physics students. Simply turning around how one expresses Newton's second law may assist students in their understanding of a deceptively simple-looking equation.
NITSOL: A Newton iterative solver for nonlinear systems
Pernice, M. [Univ. of Utah, Salt Lake City, UT (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)
Newton iterative methods, also known as truncated Newton methods, are implementations of Newton's method in which the linear systems that characterize Newton steps are solved approximately using iterative linear algebra methods. Here, we outline a well-developed Newton iterative algorithm together with a Fortran implementation called NITSOL. The basic algorithm is an inexact Newton method globalized by backtracking, in which each initial trial step is determined by applying an iterative linear solver until an inexact Newton criterion is satisfied. In the implementation, the user can specify inexact Newton criteria in several ways and select an iterative linear solver from among several popular "transpose-free" Krylov subspace methods. Jacobian-vector products used by the Krylov solver can be either evaluated analytically with a user-supplied routine or approximated using finite differences of function values. A flexible interface permits a wide variety of preconditioning strategies and allows the user to define a preconditioner and optionally update it periodically. We give details of these and other features and demonstrate the performance of the implementation on a representative set of test problems.
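NITSOL itself is a Fortran package; as a rough modern analogue (shown only as an illustration, not the NITSOL code), SciPy's newton_krylov solver follows the same recipe of inexact Newton steps solved approximately by a Krylov method with finite-difference Jacobian-vector products. The toy residual below is an assumption for demonstration purposes:

```python
# Illustration of the inexact (truncated) Newton idea using SciPy's
# newton_krylov: Newton steps are solved approximately with a Krylov
# method (here GMRES) and Jacobian-vector products are approximated by
# finite differences.
import numpy as np
from scipy.optimize import newton_krylov

def residual(u):
    # Discretized toy nonlinear two-point problem, F(u) = 0
    F = np.empty_like(u)
    F[0] = u[0]
    F[-1] = u[-1] - 1.0
    F[1:-1] = u[2:] - 2*u[1:-1] + u[:-2] - 0.1*np.exp(u[1:-1])
    return F

u0 = np.zeros(50)
sol = newton_krylov(residual, u0, method='gmres', f_tol=1e-8)
print("max |F| at solution:", np.max(np.abs(residual(sol))))
```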
Newton-Hooke Limit of Beltrami-de Sitter Spacetime, Principles of Galilei-Hooke's Relativity and Postulate on Newton-Hooke Universal Time
Huang, Chao-Guang; Guo, Han-Ying; Tian, Yu; Xu, Zhan; Zhou, Bin
Based on the Beltrami-de Sitter spacetime, we present the Newton-Hooke model under the Newton-Hooke contraction of the BdS spacetime with respect to the transformation group, algebra and geometry. It is shown that in Newton-Hooke space-time, there are inertial-type coordinate systems and inertial-type observers, which move along straight lines with uniform velocity, and they are invariant under the Newton-Hooke group. In order to determine uniquely the Newton-Hooke limit, we propose the Gal...
Pen dosimeters
SC/RP Group
The Radiation Protection Group has decided to withdraw all pen dosimeters from the main PS and SPS access points. This will be effective as of January 2006. The following changes will be implemented: All persons working in a limited-stay controlled radiation area must wear an operational dosimeter in addition to their personal DIS dosimeter. Any persons not equipped with this additional dosimeter must contact the SC/RP Group, which will make this type of dosimeter available for temporary loan. A notice giving the phone numbers of the SC/RP Group members to contact will be displayed at the former distribution points for the pen dosimeters. Thank you for your cooperation. The SC/RP Group
Cytotoxicity of Pd nanostructures supported on PEN: Influence of sterilization on Pd/PEN interface
Polívková, M., E-mail: [email protected] [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic)]; Siegel, J. [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic)]; Rimpelová, S. [Department of Biochemistry and Microbiology, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic)]; Hubáček, T. [Institute of Hydrobiology, Biology Centre of the AS CR, 370 05 Ceske Budejovice (Czech Republic)]; Kolská, Z. [Materials Centre of Usti n. L., J.E. Purkyne University, 400 96 Usti nad Labem (Czech Republic)]; Švorčík, V. [Department of Solid State Engineering, University of Chemistry and Technology Prague, 166 28 Prague (Czech Republic)]
Non-conventional antimicrobial agents, such as palladium nanostructures, have been increasingly used in medicinal technology. However, reports uncovering their harmful and damaging effects on human health have begun to appear. In this study, we have focused on in vitro cytotoxicity assessment of Pd nanostructures supported on a biocompatible polymer. Pd nanolayers of variable thicknesses (ranging from 1.1 to 22.4 nm) were sputtered on polyethylene naphthalate (PEN). These nanolayers were transformed by low-temperature post-deposition annealing into discrete nanoislands. Samples were characterized by AFM, XPS, ICP-MS and electrokinetic analysis before and after annealing. Sterilization of samples prior to cytotoxicity testing was done by UV irradiation, autoclave and/or ethanol. Among the listed sterilization techniques, we have chosen the gentlest one, which had minimal impact on sample morphology, Pd dissolution and overall Pd/PEN interface quality. Cytotoxic response of Pd nanostructures was determined by WST-1 cell viability assay in vitro using three model cell lines: mouse macrophages (RAW 264.7) and two types of mouse embryonic fibroblasts (L929 and NIH 3T3). Finally, cell morphology in response to Pd/PEN was evaluated by means of fluorescence microscopy. - Highlights: • Annealing of Pd nanolayers on PEN resulted in Pd aggregation and formation of discrete nanoislands. • UV treatment was found to be the gentlest sterilization method in terms of the physicochemical properties of the Pd/PEN interface. • Autoclaving and chemical sterilization by ethanol resulted in remarkable changes of the Pd/PEN interface. • Cytotoxicity of Pd samples was insignificant. • Pd nanostructures are potentially applicable as health-unobjectionable antibacterial coatings of medical devices.
NovoPen Echo® insulin delivery device
Hyllested-Winge J
Jacob Hyllested-Winge (Novo Nordisk Pharma Ltd, Tokyo, Japan), Thomas Sparre and Line Kynemund Pedersen (Novo Nordisk A/S, Søborg, Denmark). Abstract: The introduction of insulin pen devices has provided easier, well-tolerated, and more convenient treatment regimens for patients with diabetes mellitus. When compared with vial and syringe regimens, insulin pens offer a greater clinical efficacy, improved quality of life, and increased dosing accuracy, particularly at low doses. The portable and discreet nature of pen devices reduces the burden on the patient, facilitates adherence, and subsequently contributes to the improvement in glycemic control. NovoPen Echo® is one of the latest members of the NovoPen® family that has been specifically designed for the pediatric population and is the first to combine half-unit increment (0.5 U) insulin dosing with a simple memory function. The half-unit increment dosing amendments and accurate injection of 0.5 U of insulin are particularly beneficial for children (and insulin-sensitive adults/elders), who often require small insulin doses. The memory function can be used to record the time and amount of the last dose, reducing the fear of double dosing or missing a dose. The memory function also provides parents with extra confidence and security that their child is taking insulin at the correct doses and times. NovoPen Echo is a lightweight, durable insulin delivery pen; it is available in two different colors, which may help to distinguish between different types of insulin, providing more confidence for both users and caregivers. Studies have demonstrated a high level of patient satisfaction, with 80% of users preferring NovoPen Echo to other pediatric insulin pens. Keywords: NovoPen Echo®, memory function, half-unit increment dosing, adherence, children, adolescents
The Penning fusion experiment-ions
Schauer, M. M.; Umstadter, K. R.; Barnes, D. C.
The Penning fusion experiment (PFX) studies the feasibility of using a Penning trap as a fusion confinement device. Such use would require spatial and/or temporal compression of the plasma to overcome the Brillouin density limit imposed by the nonneutrality of Penning trap plasmas. In an earlier experiment, we achieved enhanced plasma density at the center of a pure, electron plasma confined in a hyperbolic, Penning trap by inducing spherically convergent flow in a nonthermal plasma. The goal of this work is to induce similar flow in a positive ion plasma confined in the virtual cathode provided by a spherical, uniform density electron plasma. This approach promises the greatest flexibility in operating with multi-species plasmas (e.g. D + /T + ) or implementing temporal compression schemes such as the Periodically Oscillating Plasma Sphere of Nebel and Barnes. Here, we report on our work to produce and diagnose the necessary electron plasma
From handwriting analysis to pen-computer applications
Schomaker, L
In this paper, pen computing, i.e. the use of computers and applications in which the pen is the main input device, will be described from four different viewpoints. Firstly a brief overview of the hardware developments in pen systems is given, leading to the conclusion that the technological
Newton flows for elliptic functions
Helminck, G.F.; Twilt, F.
Newton flows are dynamical systems generated by a continuous, desingularized Newton method for mappings from a Euclidean space to itself. We focus on the special case of meromorphic functions on the complex plane. Inspired by the analogy between the rational (complex) and the elliptic (i.e., doubly
Introducing Newton and classical physics
Rankin, William
The rainbow, the moon, a spinning top, a comet, the ebb and flood of the oceans ...a falling apple. There is only one universe and it fell to Isaac Newton to discover its secrets. Newton was arguably the greatest scientific genius of all time, and yet he remains a mysterious figure. Written and illustrated by William Rankin, "Introducing Newton and Classical Physics" explains the extraordinary ideas of a man who sifted through the accumulated knowledge of centuries, tossed out mistaken beliefs, and single-handedly made enormous advances in mathematics, mechanics and optics. By the age of 25, entirely self-taught, he had sketched out a system of the world. Einstein's theories are unthinkable without Newton's founding system. He was also a secret heretic, a mystic and an alchemist, the man of whom Edmund Halley said "Nearer to the gods may no man approach!". This is an ideal companion volume to "Introducing Einstein".
From Galileo to Newton
Hall, Alfred Rupert
The near century (1630–1720) that separates the important astronomical findings of Galileo Galilei (1564–1642) and the vastly influential mathematical work of Sir Isaac Newton (1642–1727) represents a pivotal stage of transition in the history of science. Tracing the revolution in physics initiated by Galileo and culminating in Newton's achievements, this book surveys the work of Huygens, Leeuwenhoek, Boyle, Descartes, and others. 35 illustrations.
Subsampled Hessian Newton Methods for Supervised Learning.
Wang, Chien-Chih; Huang, Chun-Heng; Lin, Chih-Jen
Newton methods can be applied in many supervised learning approaches. However, for large-scale data, the use of the whole Hessian matrix can be time-consuming. Recently, subsampled Newton methods have been proposed to reduce the computational time by using only a subset of data for calculating an approximation of the Hessian matrix. Unfortunately, we find that in some situations, the running speed is worse than the standard Newton method because cheaper but less accurate search directions are used. In this work, we propose some novel techniques to improve the existing subsampled Hessian Newton method. The main idea is to solve a two-dimensional subproblem per iteration to adjust the search direction to better minimize the second-order approximation of the function value. We prove the theoretical convergence of the proposed method. Experiments on logistic regression, linear SVM, maximum entropy, and deep networks indicate that our techniques significantly reduce the running time of the subsampled Hessian Newton method. The resulting algorithm becomes a compelling alternative to the standard Newton method for large-scale data classification.
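To make the idea concrete, the sketch below shows one subsampled-Hessian Newton step for L2-regularized logistic regression: the gradient uses all data, while the Hessian is estimated from a random subset. It is a generic illustration under simple assumptions, not the authors' algorithm (which adds a two-dimensional subproblem per iteration); the synthetic data and parameter values are invented for the demo.

```python
# Generic sketch of one subsampled-Hessian Newton step for L2-regularized
# logistic regression: full gradient, Hessian estimated from a random
# subsample of the data. Illustration only (no line search or
# two-dimensional correction subproblem here).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def subsampled_newton_step(w, X, y, lam, sample_frac=0.1, rng=None):
    rng = np.random.default_rng(rng)
    n, d = X.shape
    p = sigmoid(X @ w)
    grad = X.T @ (p - y) / n + lam * w                 # full gradient
    m = max(1, int(sample_frac * n))
    idx = rng.choice(n, size=m, replace=False)         # Hessian from a subsample
    Xs, ps = X[idx], p[idx]
    H = (Xs * (ps * (1.0 - ps))[:, None]).T @ Xs / m + lam * np.eye(d)
    return w - np.linalg.solve(H, grad)

# Tiny synthetic example
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + rng.normal(size=1000) > 0).astype(float)
w = np.zeros(5)
for _ in range(10):
    w = subsampled_newton_step(w, X, y, lam=1e-2, rng=rng)
print(w)
```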
Newton's Cradle in Physics Education
Gauld, Colin F.
Newton's Cradle is a series of bifilar pendulums used in physics classrooms to demonstrate the role of the principles of conservation of momentum and kinetic energy in elastic collisions. The paper reviews the way in which textbooks use Newton's Cradle and points out the unsatisfactory nature of these treatments in almost all cases. The literature…
Newton's Metaphysics of Space as God's Emanative Effect
Jacquette, Dale
In several of his writings, Isaac Newton proposed that physical space is God's "emanative effect" or "sensorium," revealing something interesting about the metaphysics underlying his mathematical physics. Newton's conjectures depart from Plato and Aristotle's metaphysics of space and from classical and Cambridge Neoplatonism. Present-day philosophical concepts of supervenience clarify Newton's ideas about space and offer a portrait of Newton not only as a mathematical physicist but also as an independent-minded rationalist philosopher.
Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods
Brown, J.; Brune, P.
Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
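A generic illustration of the idea (not the authors' implementation, which keeps the updates unassembled as low-rank factors): a Jacobian assembled once can be reused across several steps while rank-one Broyden updates absorb the drift. The toy system below is an assumption for demonstration.

```python
# Generic sketch: assemble the Jacobian once ("lagged") and correct it
# with rank-one Broyden updates instead of reassembling every iteration.
# The update is applied to a dense matrix here for clarity.
import numpy as np

def lagged_broyden_solve(F, J0, x, n_steps=15, tol=1e-12):
    J = J0.astype(float).copy()
    Fx = F(x)
    for _ in range(n_steps):
        s = np.linalg.solve(J, -Fx)                      # quasi-Newton step
        x_new = x + s
        F_new = F(x_new)
        if np.linalg.norm(F_new) < tol:
            return x_new
        J += np.outer(F_new - Fx - J @ s, s) / (s @ s)   # Broyden rank-one update
        x, Fx = x_new, F_new
    return x

# Toy system: circle of radius 2 intersected with the line x - y = 1
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1] - 1.0])
x0 = np.array([2.0, 0.5])
J0 = np.array([[2*x0[0], 2*x0[1]], [1.0, -1.0]])   # Jacobian assembled once
print(lagged_broyden_solve(F, J0, x0))
```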
Newton flows for elliptic functions: A pilot study
Twilt, F.; Helminck, G.F.; Snuverink, M.; van den Brug, L.
Elliptic Newton flows are generated by a continuous, desingularized Newton method for doubly periodic meromorphic functions on the complex plane. In the special case, where the functions underlying these elliptic Newton flows are of second-order, we introduce various, closely related, concepts of
Negative Ion Sources: Magnetron and Penning
Faircloth, D.C.
The history of the magnetron and Penning electrode geometry is briefly outlined. Plasma generation by electrical discharge-driven electron impact ionization is described and the basic physics of plasma and electrodes relevant to magnetron and Penning discharges are explained. Negative ions and their applications are introduced, along with their production mechanisms. Caesium and surface production of negative ions are detailed. Technical details of how to build magnetron and Penning surface plasma sources are given, along with examples of specific sources from around the world. Failure modes are listed and lifetimes compared.
Cortical mechanisms of person representation: recognition of famous and personally familiar names.
Sugiura, Motoaki; Sassa, Yuko; Watanabe, Jobu; Akitsuki, Yuko; Maeda, Yasuhiro; Matsue, Yoshihiko; Fukuda, Hiroshi; Kawashima, Ryuta
Personally familiar people are likely to be represented more richly in episodic, emotional, and behavioral contexts than famous people, who are usually represented predominantly in semantic context. To reveal cortical mechanisms supporting this differential person representation, we compared cortical activation during name recognition tasks between personally familiar and famous names, using an event-related functional magnetic resonance imaging (fMRI). Normal subjects performed familiar- or unfamiliar-name detection tasks during visual presentation of personally familiar (Personal), famous (Famous), and unfamiliar (Unfamiliar) names. The bilateral temporal poles and anterolateral temporal cortices, as well as the left temporoparietal junction, were activated in the contrasts Personal-Unfamiliar and Famous-Unfamiliar to a similar extent. The bilateral occipitotemporoparietal junctions, precuneus, and posterior cingulate cortex showed activation in the contrasts Personal-Unfamiliar and Personal-Famous. Together with previous findings, differential activation in the occipitotemporoparietal junction, precuneus, and posterior cingulate cortex between personally familiar and famous names is considered to reflect differential person representation. The similar extent of activation for personally familiar and famous names in the temporal pole and anterolateral temporal cortex is consistent with the associative role of the anterior temporal cortex in person identification, which has been conceptualized as a person identity node in many models of person identification. The left temporoparietal junction was considered to process familiar written names. The results illustrated the neural correlates of the person representation as a network of discrete regions in the bilateral posterior cortices, with the anterior temporal cortices having a unique associative role.
Isodose plotting for pen plotters
Rosen, I.I.
A general algorithm for treatment plan isodose line plotting is described which is particularly useful for pen plotters. Unlike other methods of plotting isodose lines, this algorithm is designed specifically to reduce pen motion, thereby reducing plotting time and wear on the transport mechanism. Points with the desired dose value are extracted from the dose matrix and stored, sorted into continuous contours, and then plotted. This algorithm has been implemented on DEC PDP-11/60 and VAX-11/780 computers for use with two models of Houston Instrument pen plotters, two models of Tektronix vector graphics terminals, a DEC VT125 raster graphics terminal, and a DEC VS11 color raster graphics terminal. Its execution time is similar to that of simpler direct-plotting methods.
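The core of the method, extracting iso-valued points and then ordering them into a continuous path before plotting, can be conveyed with a toy sketch; the greedy nearest-neighbour chaining below is only an illustration of reducing pen travel, not the paper's actual contour-sorting routine, and the example point set is an assumption.

```python
# Illustration only: order extracted isodose points so that a pen plotter
# draws them with little wasted travel, using greedy nearest-neighbour
# chaining.
import numpy as np

def chain_points(points):
    """Reorder points so that consecutive points are close together."""
    pts = [np.asarray(p, dtype=float) for p in points]
    path = [pts.pop(0)]
    while pts:
        last = path[-1]
        dists = [np.linalg.norm(p - last) for p in pts]
        path.append(pts.pop(int(np.argmin(dists))))
    return np.array(path)

# Example: scattered points lying on a circular isodose line
theta = np.random.permutation(np.linspace(0, 2*np.pi, 40, endpoint=False))
pts = np.c_[np.cos(theta), np.sin(theta)]
ordered = chain_points(pts)
print(ordered[:5])
```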
The LongPen--the world's first original remote signing device.
Kruger, Diane M
The LongPen is a remote-controlled pen and videoconferencing device conceived by Canadian author Margaret Atwood in 2004 and initially intended to bring "live" author signings to far away locations. The LongPen allows for individually inscribed long distance signatures and writing while maintaining an original record, written with pen and ink. LongPen specimens were compared with control specimens using different speeds, pen pressures, and types of pens. Preliminary indications are that LongPen inscriptions can be identified or associated with their author. Size and form are maintained and artifacts are subtle. Some limitations with respect to the capture of long tapered strokes, delicate connecting strokes, and differences in line width were noted. Factors which may impact forensic handwriting examinations include limited amounts of writing, light pen pressure, date of the writing, type of writing instrument, dimensions of the writing, and failure to consider that the device has been used.
Pen of Health Care Worker as Vector of Infection
Prashant Patil
Nosocomial infections are a major concern in tertiary hospitals. Health care workers and their belongings are known to act as vectors in the transmission of infections. In the present study, the writing pens of health care workers were examined as carriers of infection. Swabs from the writing pens of health care workers were cultured for growth of microorganisms and compared with swabs from the pens of non health care workers. The rate of microbial growth was found to be higher for the pens of health care workers. Similarly, organisms attributed to nosocomial infection were grown from the pens of health care workers. These organisms might be transmitted from the hands of health care workers, so that the writing pens used by health care workers become vectors of transmission of infection. To prevent this, the most important measure is to wash both hands and pen properly after examining patients.
Choosing the forcing terms in an inexact Newton method
Eisenstat, S.C. [Yale Univ., New Haven, CT (United States); Walker, H.F. [Utah State Univ., Logan, UT (United States)
An inexact Newton method is a generalization of Newton's method for solving F(x) = 0, F: R^n → R^n, in which each step reduces the norm of the local linear model of F. At the kth iteration, the norm reduction is usefully expressed by the inexact Newton condition ||F(x_k) + F'(x_k) s_k|| ≤ η_k ||F(x_k)||, where x_k is the current approximate solution and s_k is the step. In many applications, an η_k is first specified, and then an s_k is found for which the inexact Newton condition holds. Thus η_k is often called a "forcing term". In practice, the choice of the forcing terms is usually critical to the efficiency of the method and can affect robustness as well. Here, the authors outline several promising choices, discuss theoretical support for them, and compare their performance in a Newton iterative (truncated Newton) method applied to several large-scale problems.
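One widely used adaptive choice from this line of work takes η_k = γ (||F(x_k)|| / ||F(x_{k-1})||)^α, so that the linear solve is done more accurately only as the outer iteration converges. The sketch below is a generic illustration, not the authors' code; the toy problem and parameter values are assumptions.

```python
# Generic sketch of an inexact Newton loop with an adaptive forcing term
# eta_k = gamma * (||F(x_k)|| / ||F(x_{k-1})||)**alpha.
import numpy as np
from scipy.sparse.linalg import gmres

def inexact_newton(F, J, x, gamma=0.9, alpha=2.0, eta_max=0.9, tol=1e-10, max_it=50):
    normF_prev = None
    for _ in range(max_it):
        Fx = F(x)
        normF = np.linalg.norm(Fx)
        if normF < tol:
            break
        if normF_prev is None:
            eta = 0.5                                   # initial forcing term
        else:
            eta = min(eta_max, gamma * (normF / normF_prev) ** alpha)
        # Stop GMRES once ||F(x_k) + J(x_k) s_k|| <= eta_k ||F(x_k)||
        # (the tolerance keyword is named tol in older SciPy releases).
        s, _ = gmres(J(x), -Fx, rtol=eta)
        x = x + s
        normF_prev = normF
    return x

# Tiny 2-D test system: unit circle intersected with the line x = y
F = lambda x: np.array([x[0]**2 + x[1]**2 - 1.0, x[0] - x[1]])
J = lambda x: np.array([[2*x[0], 2*x[1]], [1.0, -1.0]])
print(inexact_newton(F, J, np.array([2.0, 0.5])))
```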
"To Improve upon Hints of Things": Illustrating Isaac Newton.
Schilt, Cornelis J
When Isaac Newton died in 1727 he left a rich legacy in terms of draft manuscripts, encompassing a variety of topics: natural philosophy, mathematics, alchemy, theology, and chronology, as well as papers relating to his career at the Mint. One thing that immediately strikes us is the textuality of Newton's legacy: images are sparse. Regarding his scholarly endeavours we witness the same practice. Newton's extensive drafts on theology and chronology do not contain a single illustration or map. Today we have all of Newton's draft manuscripts as witnesses of his working methods, as well as access to a significant number of books from his own library. Drawing parallels between Newton's reading practices and his natural philosophical and scholarly work, this paper seeks to understand Newton's recondite writing and publishing politics.
The effects of zonation of the pen and grouping in intact litters on use of pen, immune competence and health of pigs
Damgaard, B.M.; Studnitz, M.; Nielsen, Jens
The effects of pen design and group composition were examined with respect to activity, use of pen, floor conditions, health condition and immune competence for groups of 60 pigs. The experiment was designed with the two factors with zones/without zones and divided litters/intact litters. The experiment included a total of 1440 pigs from weaning at the age of 4 weeks to the age of 18 weeks after weaning. In pens with zones, the selection of different areas for different activities was improved. Pens with zones were more dirty in the elimination and open areas than pens without zones. In pens with zones, the number of lymphocytes was decreased, the ability to respond to an additional challenge by a model infection was decreased and the number of neutrophils was increased in intact litters. In week 9, the health condition was better with a group composition consisting of intact litters compared...
Newton-Cartan gravity and torsion
Bergshoeff, Eric; Chatzistavrakidis, Athanasios; Romano, Luca; Rosseel, Jan
We compare the gauging of the Bargmann algebra, for the case of arbitrary torsion, with the result that one obtains from a null-reduction of General Relativity. Whereas the two procedures lead to the same result for Newton-Cartan geometry with arbitrary torsion, the null-reduction of the Einstein equations necessarily leads to Newton-Cartan gravity with zero torsion. We show, for three space-time dimensions, how Newton-Cartan gravity with arbitrary torsion can be obtained by starting from a Schrödinger field theory with dynamical exponent z = 2 for a complex compensating scalar and next coupling this field theory to a z = 2 Schrödinger geometry with arbitrary torsion. The latter theory can be obtained from either a gauging of the Schrödinger algebra, for arbitrary torsion, or from a null-reduction of conformal gravity.
Penning ionization processes studied by electron spectroscopy
Yencha, A.J.
The technique of measuring the kinetic energy of electrons ejected from atomic or molecular species as a result of collisional energy transfer between a metastable excited rare gas atom and an atom or molecule is known as Penning ionization spectroscopy. Like the analogous photoionization process of photoelectron spectroscopy, it has yielded a considerable amount of information about the ionization potentials of numerous molecular systems. It is, in fact, the combined analysis of photoelectron and Penning electron spectra that affords a probe of the particle-particle interactions that occur in the Penning process. In this paper a short survey of the phenomenon of Penning ionization, as studied by electron spectroscopy, will be presented as it pertains to the ionization processes of simple molecules by metastable excited atoms. (author)
Famous North American wolves and the credibility of early wildlife literature
Gipson, P.S.; Ballard, W.B.; Nowak, R.M.
We evaluated the credibility of early literature about famous North American wolves (Canis lupus). Many famous wolves were reported to be older than they actually were, and we estimated they did not live long enough to have caused purported damage to livestock and game animals. Wolf kill rates on free-ranging livestock appeared to be inflated compared to recently published kill rates on native ungulates and livestock. Surplus killing of sheep and goats may have accounted for some high kill rates, but surplus killing of free-ranging longhorn cattle probably did not occur. Some famous wolves may actually have been dogs (C. familiaris), wolf-dog hybrids, or possibly coyote (C. latrans)-dog hybrids. We documented instances where early authors appeared to embellish or fabricate information about famous wolves. Caution should be exercised when using early literature about wolves as a basis for wolf management decisions.
On Newton-Cartan trace anomalies
Auzzi, Roberto; Baiguera, Stefano; Nardelli, Giuseppe
We classify the trace anomaly for parity-invariant non-relativistic Schrödinger theories in 2+1 dimensions coupled to background Newton-Cartan gravity. The general anomaly structure looks very different from the one in the z=2 Lifshitz theories. The type A content of the anomaly is remarkably identical to that of the relativistic 3+1 dimensional case, suggesting the conjecture that an a-theorem should exist also in the Newton-Cartan context.
Goethe's Exposure of Newton's theory a polemic on Newton's theory of light and colour
Johann Wolfgang von Goethe, although best known for his literary work, was also a keen and outspoken natural scientist. In the second polemic part of Zur Farbenlehre (Theory of Colours), for example, Goethe attacked Isaac Newton's ground-breaking revelation that light is heterogeneous and not immutable, as was previously thought. This polemic was unanimously rejected by the physicists of the day, and has often been omitted from compendia of Goethe's works. Indeed, although Goethe repeated all of Newton's key experiments, he was never able to achieve the same results. Many reasons have been proposed for this, ranging from the psychological — such as a blind hatred of Newtonism, self-deceit and paranoid psychosis — to accusations of incapability — Goethe simply did not understand the experiments. Yet Goethe was never to be dissuaded from this passionate conviction. This translation of Goethe's second polemic, published for the first time in English, makes it clear that Goethe did understand the thrust of Ne...
THE NOTORIOUS, REPUTED AND FAMOUS TRADEMARKS
Andreea LIVĂDARIU
The owner of a trademark that has a reputation in Romania or in the European Union may request the court to forbid an infringer from using, without its consent, a sign identical or similar to its trademark, even for products or services different from those which are sold or provided under said trademark. According to Law no. 84/1998, the notorious (well-known) trademark is a trademark which does not necessarily have to be registered to enjoy protection under the Trademark law. Romanian doctrine maintains that famous trademarks do exist. In this paper, we shall attempt to find (if it really does exist) the difference between notorious (well-known), reputed and famous trademarks, the criteria by means of which these trademarks shall be distinguished, and the evidence by means of which the notoriety, reputation or fame of a trademark may be argued. We shall also present the legal regime, and our analysis will be based on the Trademark law, doctrine and case-law studies.
Field-Split Preconditioned Inexact Newton Algorithms
Liu, Lulu
The multiplicative Schwarz preconditioned inexact Newton (MSPIN) algorithm is presented as a complement to additive Schwarz preconditioned inexact Newton (ASPIN). At an algebraic level, ASPIN and MSPIN are variants of the same strategy to improve the convergence of systems with unbalanced nonlinearities; however, they have natural complementarity in practice. MSPIN is naturally based on partitioning of degrees of freedom in a nonlinear PDE system by field type rather than by subdomain, where a modest factor of concurrency can be sacrificed for physically motivated convergence robustness. ASPIN, originally introduced for decompositions into subdomains, is natural for high concurrency and reduction of global synchronization. We consider both types of inexact Newton algorithms in the field-split context, and we augment the classical convergence theory of ASPIN for the multiplicative case. Numerical experiments show that MSPIN can be significantly more robust than Newton methods based on global linearizations, and that MSPIN can be more robust than ASPIN and maintain fast convergence even for challenging problems, such as high Reynolds number Navier--Stokes equations.
Newton-type methods for optimization and variational problems
Izmailov, Alexey F
This book presents comprehensive state-of-the-art theoretical analysis of the fundamental Newtonian and Newtonian-related approaches to solving optimization and variational problems. A central focus is the relationship between the basic Newton scheme for a given problem and algorithms that also enjoy fast local convergence. The authors develop general perturbed Newtonian frameworks that preserve fast convergence and consider specific algorithms as particular cases within those frameworks, i.e., as perturbations of the associated basic Newton iterations. This approach yields a set of tools for the unified treatment of various algorithms, including some not of the Newton type per se. Among the new subjects addressed is the class of degenerate problems. In particular, the phenomenon of attraction of Newton iterates to critical Lagrange multipliers and its consequences as well as stabilized Newton methods for variational problems and stabilized sequential quadratic programming for optimization. This volume will b...
Newton's law in de Sitter brane
Ghoroku, Kazuo; Nakamura, Akihiro; Yahiro, Masanobu
The Newton potential has been evaluated for the case of a dS brane embedded in Minkowski, dS5 and AdS5 bulks. We point out that only the AdS5 bulk might be consistent with Newton's law from the brane-world viewpoint when we respect the small cosmological constant observed in the present universe.
3, 2, 1 ... Discovering Newton's Laws
Lutz, Joe; Sylvester, Kevin; Oliver, Keith; Herrington, Deborah
"For every action there is an equal and opposite reaction." "Except when a bug hits your car window, the car must exert more force on the bug because Newton's laws only apply in the physics classroom, right?" Students in our classrooms were able to pick out definitions as well as examples of Newton's three laws; they could…
De innemende verteller Louwrens Penning
Ester, Hans
Dutch author Louwrens Penning (1854 – 1927) indisputably contributed more than any other novelist in the Netherlands to the high degree of solidarity with the Boers before, during and after the Anglo- Boer War 1899 – 1902. Penning, being a believer in the Calvinist tradition, had a profound trust in God's guidance of history. He identified the war of the Boer soldiers against the British imperialists with the rebellion of the Dutch out of their true religious convictions against t...
Newton-Krylov methods applied to nonequilibrium radiation diffusion
Knoll, D.A.; Rider, W.J.; Olsen, G.L.
The authors present results of applying a matrix-free Newton-Krylov method to a nonequilibrium radiation diffusion problem. Here, there is no use of operator splitting, and Newton's method is used to converge the nonlinearities within a time step. Since the nonlinear residual is formed, it is used to monitor convergence. It is demonstrated that a simple Picard-based linearization produces a sufficient preconditioning matrix for the Krylov method, thus alleviating the need to form or store a Jacobian matrix for Newton's method. They discuss the possibility that the Newton-Krylov approach may allow larger time steps, without loss of accuracy, as compared to an operator split approach where nonlinearities are not converged within a time step.
9 CFR 313.1 - Livestock pens, driveways and ramps.
Title 9 (Animals and Animal Products), Inspection and Certification, Humane Slaughter of Livestock, § 313.1 Livestock pens, driveways and ramps. (a) Livestock pens, driveways and ramps shall be maintained in good repair. They shall be free from sharp or...
Newton's iteration for inversion of Cauchy-like and other structured matrices
Pan, V.Y. [Lehman College, Bronx, NY (United States); Zheng, Ailong; Huang, Xiaohan; Dias, O. [CUNY, New York, NY (United States)
We specify some initial assumptions that guarantee rapid refinement of a rough initial approximation to the inverse of a Cauchy-like matrix, by means of our new modification of Newton's iteration, where the input, output, and all the auxiliary matrices are represented with their short generators defined by the associated scaling operators. The computations are performed fast since they are confined to operations with short generators of the given and computed matrices. Because of the known correlations among various structured matrices, the algorithm is immediately extended to rapid refinement of rough initial approximations to the inverses of Vandermonde-like, Chebyshev-Vandermonde-like and Toeplitz-like matrices, where again, the computations are confined to operations with short generators of the involved matrices.
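The classical scheme underneath is Newton's iteration for the matrix inverse, X_{k+1} = X_k (2I - A X_k). The sketch below shows it generically on a small dense matrix, without the short-generator machinery the abstract is really about; the matrix and starting guess are assumptions for illustration.

```python
# Generic sketch of Newton's iteration for matrix inversion,
# X_{k+1} = X_k (2I - A X_k), on a small dense matrix.
import numpy as np

def newton_inverse(A, X0, iters=20):
    I = np.eye(A.shape[0])
    X = X0
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

A = np.array([[4.0, 1.0], [2.0, 3.0]])
X0 = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # standard rough start
Xinv = newton_inverse(A, X0)
print(np.allclose(Xinv @ A, np.eye(2)))
```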
Westfall, Richard S
Definitive, concise, and very interesting... From William Shakespeare to Winston Churchill, the Very Interesting People series provides authoritative bite-sized biographies of Britain's most fascinating historical figures - people whose influence and importance have stood the test of time. Each book in the series is based upon the biographical entry from the world-famous Oxford Dictionary of National Biography. -
Newton-Krylov-Schwarz methods in unstructured grid Euler flow
Keyes, D.E. [Old Dominion Univ., Norfolk, VA (United States)
Newton-Krylov methods and Krylov-Schwarz (domain decomposition) methods have begun to become established in computational fluid dynamics (CFD) over the past decade. The former employ a Krylov method inside of Newton's method in a Jacobian-free manner, through directional differencing. The latter employ an overlapping Schwarz domain decomposition to derive a preconditioner for the Krylov accelerator that relies primarily on local information, for data-parallel concurrency. They may be composed as Newton-Krylov-Schwarz (NKS) methods, which seem particularly well suited for solving nonlinear elliptic systems in high-latency, distributed-memory environments. We give a brief description of this family of algorithms, with an emphasis on domain decomposition iterative aspects. We then describe numerical simulations with Newton-Krylov-Schwarz methods on an aerodynamic application emphasizing comparisons with a standard defect-correction approach and subdomain preconditioner consistency.
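The phrase "in a Jacobian-free manner, through directional differencing" refers to approximating J(u)v by (F(u + ε v) - F(u)) / ε. A minimal generic sketch of that building block follows (Schwarz preconditioning omitted; the toy system is an assumption):

```python
# Minimal generic sketch of the Jacobian-free building block: the Krylov
# solver only needs Jacobian-vector products, approximated here by
# directional finite differences of the nonlinear residual F.
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

def jfnk_step(F, u, eps=1e-7):
    Fu = F(u)
    n = u.size

    def jv(v):                          # J(u) v  ~=  (F(u + e v) - F(u)) / e
        nv = np.linalg.norm(v)
        if nv == 0.0:
            return np.zeros_like(v)
        e = eps * (1.0 + np.linalg.norm(u)) / nv
        return (F(u + e * v) - Fu) / e

    J = LinearOperator((n, n), matvec=jv)
    s, _ = gmres(J, -Fu)                # approximate Newton step
    return u + s

# Toy nonlinear system (an assumption for illustration)
F = lambda u: np.array([u[0] + u[1]**2 - 2.0, u[0]*u[1] - 1.0])
u = np.array([0.8, 0.9])
for _ in range(10):
    u = jfnk_step(F, u)
print(u, F(u))
```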
The history of the priority di∫pute between Newton and Leibniz mathematics in history and culture
Sonar, Thomas
This book provides a thrilling history of the famous priority dispute between Gottfried Wilhelm Leibniz and Isaac Newton, presenting the episode for the first time in the context of cultural history. It introduces readers to the background of the dispute, details its escalation, and discusses the aftermath of the big divide, which extended well into recent times. One of the unique features of the book is that the mathematics behind the story is very intelligibly explained – an approach that offers general readers interested in the history of sciences and mathematics a window into the world of these two giants in their field. From the epilogue to the German edition by Eberhard Knobloch: Thomas Sonar has traced the emergence and the escalation of this conflict, which was heightened by Leibniz's rejection of Newton's gravitation theory, in a grandiose, excitingly written monograph. With absolute competence, he also explains the mathematical context so that non-mathematicians will also profit from the book....
Coherent states approach to Penning trap
Fernandez, David J; Velazquez, Mercedes
By using a matrix technique, which allows us to identify directly the ladder operators, the Penning trap coherent states are derived as eigenstates of the appropriate annihilation operators. These states are compared with those obtained through the displacement operator. The associated wavefunctions and mean values for some relevant operators in these states are also evaluated. It turns out that the Penning trap coherent states minimize the Heisenberg uncertainty relation
Conformal mechanics in Newton-Hooke spacetime
Galajinsky, Anton
Conformal many-body mechanics in Newton-Hooke spacetime is studied within the framework of the Lagrangian formalism. Global symmetries and Noether charges are given in a form convenient for analyzing the flat space limit. The N=2 superconformal extension is built and a new class of N=2 models related to simple Lie algebras is presented. A decoupling similarity transformation on N=2 quantum mechanics in Newton-Hooke spacetime is discussed.
Good Intentions!: Ten Great Books Which Introduce Readers To a Famous Writer.
Van Deusen, Ann; Hepler, Susan
Offers short descriptions of 10 books for children in which a famous writer appears as an essential character and a catalyst for the plot or content (while another character tells the story). Includes such famous writers as Benjamin Franklin, Emily Dickinson, and William Shakespeare. (SR)
Isaac Newton Olympics.
Cox, Carol
Presents the Isaac Newton Olympics in which students complete a hands-on activity at seven stations and evaluate what they have learned in the activity and how it is related to real life. Includes both student and teacher instructions for three of the activities. (YDS)
Newton's law of cooling revisited
Vollmer, M
The cooling of objects is often described by a law, attributed to Newton, which states that the temperature difference of a cooling body with respect to the surroundings decreases exponentially with time. Such behaviour has been observed for many laboratory experiments, which led to a wide acceptance of this approach. However, the heat transfer from any object to its surroundings is not only due to conduction and convection but also due to radiation. The latter does not vary linearly with temperature difference, which leads to deviations from Newton's law. This paper presents a theoretical analysis of the cooling of objects with a small Biot number. It is shown that Newton's law of cooling, i.e. simple exponential behaviour, is mostly valid if temperature differences are below a certain threshold which depends on the experimental conditions. For any larger temperature differences appreciable deviations occur which need the complete nonlinear treatment. This is demonstrated by results of some laboratory experiments which use IR imaging to measure surface temperatures of solid cooling objects with temperature differences of up to 300 K.
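As a rough numerical illustration of this point (a generic lumped-capacitance sketch with assumed parameter values, not the paper's model), one can integrate a cooling law containing both a linear convective term and a T^4 radiative term and compare the result with the pure exponential of Newton's law:

```python
# Rough illustration: lumped-capacitance cooling with convection plus
# Stefan-Boltzmann radiation, compared with the pure exponential of
# Newton's law, T(t) = T_env + (T0 - T_env) * exp(-k t).
import numpy as np

sigma = 5.67e-8            # Stefan-Boltzmann constant [W m^-2 K^-4]
h, eps_r = 10.0, 0.9       # assumed convective coefficient and emissivity
A_over_C = 1e-3            # assumed (area / heat capacity) ratio [m^2 K J^-1]
T_env, T0 = 300.0, 600.0   # ambient and initial temperature [K]

def dTdt(T):
    conv = h * (T - T_env)
    rad = eps_r * sigma * (T**4 - T_env**4)
    return -A_over_C * (conv + rad)

dt, t_end = 0.1, 100.0
T = T0
for _ in range(int(t_end / dt)):       # simple explicit Euler integration
    T += dt * dTdt(T)

k = A_over_C * h                        # linearized "Newton" rate constant
T_newton = T_env + (T0 - T_env) * np.exp(-k * t_end)
print("with radiation:", round(T, 1), "K;  Newton-only:", round(T_newton, 1), "K")
```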
Isaac Newton and the astronomical refraction.
Lehn, Waldemar H
In a short interval toward the end of 1694, Isaac Newton developed two mathematical models for the theory of the astronomical refraction and calculated two refraction tables, but did not publish his theory. Much effort has been expended, starting with Biot in 1836, in the attempt to identify the methods and equations that Newton used. In contrast to previous work, a closed form solution is identified for the refraction integral that reproduces the table for his first model (in which density decays linearly with elevation). The parameters of his second model, which includes the exponential variation of pressure in an isothermal atmosphere, have also been identified by reproducing his results. The implication is clear that in each case Newton had derived exactly the correct equations for the astronomical refraction; furthermore, he was the first to do so.
Administration technique and storage of disposable insulin pens reported by patients with diabetes.
Mitchell, Virginia D; Porter, Kyle; Beatty, Stuart J
The purpose of the study was to evaluate insulin injection technique and storage of insulin pens as reported by patients with diabetes and to compare correct pen use to initial education on injection technique, hemoglobin A1C, duration of insulin therapy, and duration of insulin pen. Cross-sectional questionnaire orally administered to patients at a university-affiliated primary care practice. Subjects were patients with diabetes who were 18 years or older and prescribed a disposable insulin pen for at least 4 weeks. A correct usage score was calculated for each patient based on manufacturer recommendations for disposable insulin pen use. Associations were made between the correct usage score and certainty in technique, initial education, years of insulin therapy, duration of pen use, and hemoglobin A1C. Sixty-seven patients completed the questionnaire, reporting total use of 94 insulin pens. The 3 components most often neglected by patients were priming pen needle, holding for specific count time before withdrawal of pen needle from skin, and storing an in-use pen. For three-fourths of the insulin pens being used, users did not follow the manufacturer's instructions for proper administration and storage of insulin pens. Correct usage scores were significantly higher if initial education on insulin pens was performed by a pharmacist or nurse. The majority of patients may be ignoring or unaware of key components for consistent insulin dosing using disposable insulin pens; therefore, initial education and reeducation on correct use of disposable insulin pens by health care professionals are needed.
Newton slopes for Artin-Schreier-Witt towers
Davis, Christopher; Wan, Daqing; Xiao, Liang
We fix a monic polynomial f(x) ∈ F_q[x] over a finite field and consider the Artin-Schreier-Witt tower defined by f(x); this is a tower of curves ⋯ → C_m → C_{m−1} → ⋯ → C_0 = A^1, with total Galois group Z_p. We study the Newton slopes of zeta functions of this tower of curves. This reduces to the study of the Newton slopes of L-functions associated to characters of the Galois group of this tower. We prove that, when the conductor of the character is large enough, the Newton slopes of the L-function form arithmetic progressions which are independent of the conductor of the character. As a corollary, we obtain...
Pen-and-Paper User Interfaces
Steimle, Jurgen
Even at the beginning of the 21st century, we are far from becoming paperless. Pen and paper is still the only truly ubiquitous information processing technology. Pen-and-paper user interfaces bridge the gap between paper and the digital world. Rather than replacing paper with electronic media, they seamlessly integrate both worlds in a hybrid user interface. Classical paper documents become interactive. This opens up a huge field of novel computer applications at our workplaces and in our homes. This book provides readers with a broad and extensive overview of the field, so as to provide a fu
On the topology of the Newton boundary at infinity
We will be interested in a global version of the Lê-Ramanujam μ-constant theorem from the Newton polyhedron point of view. More precisely, we prove a stability theorem which says that the global monodromy fibration of a Newton non-degenerate polynomial is uniquely determined by its Newton boundary at infinity. Besides, the continuity of atypical values for a family of complex polynomial functions is also considered. (author)
How fast is famous face recognition?
Gladys Barragan-Jason
The rapid recognition of familiar faces is crucial for social interactions. However, the actual speed with which recognition can be achieved remains largely unknown as most studies have been carried out without any speed constraints. Different paradigms have been used, leading to conflicting results, and although many authors suggest that face recognition is fast, the speed of face recognition has not been directly compared to fast visual tasks. In this study, we sought to overcome these limitations. Subjects performed three tasks, a familiarity categorization task (famous faces among unknown faces), a superordinate categorization task (human faces among animal ones), and a gender categorization task. All tasks were performed under speed constraints. The results show that, despite the use of speed constraints, subjects were slow when they had to categorize famous faces: minimum reaction time was 467 ms, which is 180 ms more than during superordinate categorization and 160 ms more than in the gender condition. Our results are compatible with a hierarchy of face processing from the superordinate level to the familiarity level. The processes taking place between detection and recognition need to be investigated in detail.
La malle de Newton
Verlet, Loup
In 1936, a public auction brought to light the contents of a trunk in which Newton had locked away his manuscripts. To general surprise, the scholar's scientific work sat side by side with the speculations of the exegete and the alchemist. What was thus revealed was not only the hidden face of an exceptional scientific genius but, beyond the mystery of one man, the secret division that governs our universe, as this original reading of the birth of modern physics shows. What kind of world have I fallen into? Why are things the way they are? How am I to deal with them? These are the nagging questions of the child when the mother is absent, and of the researcher faced with a nature that slips away. Newton knows where to find the answer: God the Father, forever beyond our grasp, is present "everywhere and always"; He reveals Himself through the mouths of the prophets, can be glimpsed in the arcana of alchemy, and manifests Himself in the admirable laws that govern the ordinary course of things. His writings from the shadows attest to it, Newton ...
Voltaire-Newton... Renversant!
The encounter, however improbable, between François Marie Arouet, known as Voltaire, and Isaac Newton could only take place in the Pays de Gex, near his city... It is indeed right above the accelerator, in Saint-Genis, that the meeting between these two "monsters" of the 18th century took place.
Early warning of diarrhea and pen fouling in growing pigs using sensor-based monitoring
Jensen, D. B.; Toft, Nils; Kristensen, A. R.
...then identify specific pens in need of extra attention. Here we evaluate the value of monitoring live weight, feed usage, humidity, drinking behavior and pen temperature in relation to early warnings of diarrhea and pen fouling in slaughter pigs. Materials and methods: We used data collected in 16 pens (8 double-pens) between November 2013 and December 2014 at a commercial Danish farm. During this time, three new batches were inserted. We monitored the mean live weight of the pigs per pen (weekly, only in 4 pens), feed usage per double-pen (daily), humidity per section (daily), temperature at two positions per pen (hourly), water flow per double-pen (liters/hour/pig) and drinking frequency per pen (activations/hour/pig). Staff registrations of diarrhea and pen fouling were the events of interest. The data were divided into a learning set (15 events) and a test set (18 events). The data were modeled...
Tail posture predicts tail biting outbreaks at pen level in weaner pigs
Lahrmann, Helle Pelant; Hansen, Christian Fink; D'Eath, Rick
Detecting a tail biting outbreak early is essential to reduce the risk of pigs getting severe tail damage. A few previous studies suggest that tail posture and behavioural differences can predict an upcoming outbreak. The aim of the present study was therefore to investigate if differences in tail posture and behaviour could be detected at pen level between upcoming tail biting pens (T-pens) and control pens (C-pens). The study included 2301 undocked weaner pigs in 74 pens (mean 31.1 pigs/pen; SD 1.5). Tails were scored three times weekly (wound freshness, wound severity and tail length) between 07:00 h-14:00 h from weaning until a tail biting outbreak. An outbreak (day 0) occurred when at least four pigs had a tail damage, regardless of wound freshness. On average 7.6 (SD 4.3) pigs had a damaged tail (scratches + wound) in T-pens on day 0. Tail posture and behaviour (activity, eating...
Humalog(®) KwikPen™: an insulin-injecting pen designed for ease of use.
Schwartz, Sherwyn L; Ignaut, Debra A; Bodie, Jennifer N
Insulin pens offer significant benefits over vial and syringe injections for patients with diabetes who require insulin therapy. Insulin pens are more discreet, easier for patients to hold and inject, and provide better dosing accuracy than vial and syringe injections. The Humalog(®) KwikPen™ (prefilled insulin lispro [Humalog] pen, Eli Lilly and Company, Indianapolis, IN, USA) is a prefilled insulin pen highly rated by patients for ease of use in injections, and has been preferred by patients to both a comparable insulin pen and to vial and syringe injections in comparator studies. Together with an engineering study demonstrating smoother injections and reduced dosing error versus a comparator pen, recent evidence demonstrates the Humalog KwikPen device is an accurate, easy-to-use, patient-preferred insulin pen.
Famous puzzles of great mathematicians
Petković, Miodrag S
This entertaining book presents a collection of 180 famous mathematical puzzles and intriguing elementary problems that great mathematicians have posed, discussed, and/or solved. The selected problems do not require advanced mathematics, making this book accessible to a variety of readers. Mathematical recreations offer a rich playground for both amateur and professional mathematicians. Believing that creative stimuli and aesthetic considerations are closely related, great mathematicians from ancient times to the present have always taken an interest in puzzles and diversions. The goal of this
Newton's law in braneworlds with an infinite extra dimension
Ito, Masato
We study the behavior of the four-dimensional Newton's law in warped braneworlds. The setup considered here is a $(3+n)$-brane embedded in $(5+n)$ dimensions, where $n$ extra dimensions are compactified and one dimension is infinite. We show that the wave function of gravity is described in terms of Bessel functions of order $(2+n/2)$, and we estimate the correction to Newton's law. In particular, Newton's law for $n=1$ can be obtained exactly.
Hukum Newton Tentang Gerak Dalam Ruang Fase Tak Komutatif
Purwanto, Joko
In this paper, Newton's laws of motion in a noncommutative phase space have been investigated. It is shown that corrections to Newton's first and second laws appear if we assume that the phase space has a symplectic structure consistent with the commutation rules of noncommutative quantum mechanics. In the free particle and harmonic oscillator cases, the equations of motion are derived on the basis of the modified Newton's second law in a noncommutative phase space.
Newton and the origin of civilization
Buchwald, Jed Z
Isaac Newton's Chronology of Ancient Kingdoms Amended, published in 1728, one year after the great man's death, unleashed a storm of controversy. And for good reason. The book presents a drastically revised timeline for ancient civilizations, contracting Greek history by five hundred years and Egypt's by a millennium. Newton and the Origin of Civilization tells the story of how one of the most celebrated figures in the history of mathematics, optics, and mechanics came to apply his unique ways of thinking to problems of history, theology, and mythology, and of how his radical ideas produced an uproar that reverberated in Europe's learned circles throughout the eighteenth century and beyond. Jed Buchwald and Mordechai Feingold reveal the manner in which Newton strove for nearly half a century to rectify universal history by reading ancient texts through the lens of astronomy, and to create a tight theoretical system for interpreting the evolution of civilization on the basis of population dynamics. It was duri...
Of Papers and Pens: Polysemes and Homophones in Lexical (Mis)Selection
Li, Leon; Slevc, L. Robert
Every word signifies multiple senses. Many studies using comprehension-based measures suggest that polysemes' senses (e.g., "paper" as in "printer paper" or "term paper") share lexical representations, whereas homophones' meanings (e.g., "pen" as in "ballpoint pen" or "pig pen")…
Some Elementary Examples from Newton's Principia
...ing both differential and integral calculus. Newton used many geometrical methods extensively to derive the results in spite of his having discovered calculus. Geometry, judiciously used with limiting procedures, was one principal strategy used by Newton in the Principia. The Principia presents an enormous range of ...
XMM-Newton On-demand Reprocessing Using SaaS Technology
Ibarra, A.; Fajersztejn, N.; Loiseau, N.; Gabriel, C.
We present here the architectural design of the new on-the-fly reprocessing capabilities that will soon be developed and implemented in the new XMM-Newton Science Operation Centre. The inclusion of processing capabilities into the archive, as we plan it, will be possible thanks to the recent refurbishment of the XMM-Newton science archive, its alignment with the latest web technologies, and the XMM-Newton Remote Interface for Science Analysis (RISA), a revolutionary idea of providing processing capabilities through internet services.
Essential Characteristics of Plasma Antennas Filled with He-Ar Penning Gases
Sun Naifeng; Li Wenzhong; Wang Shiqing; Li Jian; Ci Jiaxiang
Based on the essential theory of Penning gases, the discharge characteristics of He-Ar Penning gases in insulating tubes were analyzed qualitatively. The relation between the effective length of an antenna column filled with He-Ar Penning gases and the applied radio frequency (RF) power was investigated both theoretically and experimentally. The distribution of the plasma density along the antenna column in different conditions was studied. The receiving characteristics of local frequency modulated (FM) electromagnetic waves by the plasma antenna filled with He-Ar Penning gases were compared with those by an aluminum antenna with the same dimensions. Results show that it is feasible to take plasma antennas filled with He-Ar Penning gases as receiving antennas.
Newton's Contributions to Optics
creativity is apparent, even in ideas and models in optics that were ... Around Newton's time, a number of leading figures in science ..... successive circles increased as integers, i.e. d increases by inte- ... of easy reflections and transmission".
Ease of use and patient preference injection simulation study comparing two prefilled insulin pens.
Clark, Paula E; Valentine, Virginia; Bodie, Jennifer N; Sarwat, Samiha
To determine patient ease of use and preference for the Humalog KwikPen (prefilled insulin lispro [Humalog] pen, Eli Lilly and Company, Indianapolis, IN, USA) (insulin lispro pen) versus the Next Generation FlexPen (prefilled insulin aspart [NovoRapid] pen, Novo Nordisk A/S, Bagsvaerd, Denmark) (insulin aspart pen). This was a randomized, open-label, 2-period, 8-sequence crossover study in insulin pen-naïve patients with diabetes. Randomized patients (N = 367) received device training, then simulated low- (15 U) and high- (60 U) dose insulin injections with an appliance. Patients rated pens using an ease of use questionnaire and were asked separately for final pen preferences. The Insulin Device 'Ease of Use' Battery is a 10-item questionnaire with a 7-point scale (higher scores reflect greater ease of use). The primary objective was to determine pen preference for 'easy to press to inject my dose' (by comparing composite scores [low- plus high-dose]). Secondary objectives were to determine pen preference on select questionnaire items (from composite scores), final pen preference, and summary responses for all questionnaire items. On the primary endpoint, 'easy to press to inject my dose,' a statistically significant majority of patients with a preference chose the insulin lispro pen over the insulin aspart pen (68.4%, 95% CI = 62.7-73.6%). Statistically significant majorities of patients with a preference also favored the insulin lispro pen on secondary items: 'easy to hold in my hand when I inject' (64.9%, 95% CI = 58.8-70.7%), 'easy to use when I am in a public place' (67.5%, 95% CI = 61.0-73.6%), and 'overall easy to use' (69.9%, 95% CI = 63.9-75.4%). A statistically significant majority of patients had a final preference for the insulin lispro pen (67.3%, 95% CI = 62.2-72.1%). Among pen-naïve patients with diabetes who had a preference, the majority preferred the insulin lispro pen over the insulin aspart pen with regard
Disformal transformation in Newton-Cartan geometry
Huang, Peng [Zhejiang Chinese Medical University, Department of Information, Hangzhou (China); Sun Yat-Sen University, School of Physics and Astronomy, Guangzhou (China); Yuan, Fang-Fang [Nankai University, School of Physics, Tianjin (China)
Newton-Cartan geometry has played a central role in recent discussions of the non-relativistic holography and condensed matter systems. Although the conformal transformation in non-relativistic holography can easily be rephrased in terms of Newton-Cartan geometry, we show that it requires a nontrivial procedure to arrive at the consistent form of anisotropic disformal transformation in this geometry. Furthermore, as an application of the newly obtained transformation, we use it to induce a geometric structure which may be seen as a particular non-relativistic version of the Weyl integrable geometry. (orig.)
Bargmann structures and Newton-Cartan theory
Duval, C.; Burdet, G.; Kuenzle, H.P.; Perrin, M.
It is shown that Newton-Cartan theory of gravitation can best be formulated on a five-dimensional extended space-time carrying a Lorentz metric together with a null parallel vector field. The corresponding geometry associated with the Bargmann group (nontrivially extended Galilei group) viewed as a subgroup of the affine de Sitter group AO(4,1) is thoroughly investigated. This new global formalism allows one to recast classical particle dynamics and the Schroedinger equation into a purely covariant form. The Newton-Cartan field equations are readily derived from Einstein's Lagrangian on the space-time extension
Eye-openers from XMM-Newton
many years of work. They are all that we hoped they would be. In the LMC we can see the elements, which go to make up new stars and planets, being released in giant stellar explosions. We can even see the creation of new stars going on, using elements scattered through space by previous stellar explosions. This is what we built the EPIC cameras for and they are really fulfilling their promise" Multiwavelength views of Hickson Group 16 The HCG-16 viewed by EPIC and by the Optical Monitor in the visible and ultraviolet wavelengths is one of approximately a hundred compact galaxy clusters listed by Canadian astronomer Paul Hickson in the 1980s. The criteria for the Hickson cluster groups included their compactness, their isolation from other galaxies and a limited magnitude range between their members. Most Hicksons are very faint, but a few can be observed with modest aperture telescopes. Galaxies in Hickson groups have a high probability of interacting. Their study has shed light on the question of galactic evolution and the effects of interaction. Investigation into their gravitational behaviour has also significantly contributed to our understanding of "dark matter", the mysterious matter that most astronomers feel comprises well over 90% of our universe. Observation of celestial objects from space over a range of X-ray, ultraviolet and visible wavelengths, is a unique feature of the XMM-Newton mission. The EPIC-PN view of the Hickson 16 group shows a handful of bright X-sources and in the background more than a hundred faint X-ray sources that XMM-Newton is revealing for the first time. Juxtaposing the X-ray view of HCG 16 with that of the Optical Monitor reveals one of the great strengths of XMM-Newton in being able to routinely compare the optical, ultraviolet and X-ray properties of objects. Many of the X-ray sources are revealed as elongated "fuzzy blobs" coincident with some of the optical galaxies. Routine access to ultraviolet images is a first for the mission
Continuation Newton methods
Axelsson, Owe; Sysala, Stanislav
Roč. 70, č. 11 (2015), s. 2621-2637 ISSN 0898-1221 R&D Projects: GA ČR GA13-18652S Institutional support: RVO:68145535 Keywords: system of nonlinear equations * Newton method * load increment method * elastoplasticity Subject RIV: IN - Informatics, Computer Science Impact factor: 1.398, year: 2015 http://www.sciencedirect.com/science/article/pii/S0898122115003818
Newton's law of funding
Isaac Newton, besides being the founder of modern physics, was also master of Britain's mint. That is a precedent which many British physicists must surely wish had become traditional. At the moment, money for physics is in short supply in Britain.
A description of the FAMOUS (version XDBUA) climate model and control run
A. Osprey
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias from high northern latitudes and the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Newton force from wave function collapse: speculation and test
Diósi, Lajos
The Diosi-Penrose model of quantum-classical boundary postulates gravity-related spontaneous wave function collapse of massive degrees of freedom. The decoherence effects of the collapses are in principle detectable if not masked by the overwhelming environmental decoherence. But the DP (or any other, like GRW, CSL) spontaneous collapses are not detectable themselves, they are merely the redundant formalism of spontaneous decoherence. To let DP collapses become testable physics, recently we extended the DP model and proposed that DP collapses are responsible for the emergence of the Newton gravitational force between massive objects. We identified the collapse rate, possibly of the order of 1/ms, with the rate of emergence of the Newton force. A simple heuristic emergence (delay) time was added to the Newton law of gravity. This non-relativistic delay is in peaceful coexistence with Einstein's relativistic theory of gravitation, at least no experimental evidence has so far surfaced against it. We derive new predictions of such a 'lazy' Newton law that will enable decisive laboratory tests with available technologies. The simple equation of 'lazy' Newton law deserves theoretical and experimental studies in itself, independently of the underlying quantum foundational considerations.
Pen-based Interfaces for Engineering and Education
Stahovich, Thomas F.
Sketches are an important problem-solving tool in many fields. This is particularly true of engineering design, where sketches facilitate creativity by providing an efficient medium for expressing ideas. However, despite the importance of sketches in engineering practice, current engineering software still relies on traditional mouse and keyboard interfaces, with little or no capabilities to handle free-form sketch input. With recent advances in machine-interpretation techniques, it is now becoming possible to create practical interpretation-based interfaces for such software. In this chapter, we report on our efforts to create interpretation techniques to enable pen-based engineering applications. We describe work on two fundamental sketch understanding problems. The first is sketch parsing, the task of clustering pen strokes or geometric primitives into individual symbols. The second is symbol recognition, the task of classifying symbols once they have been located by a parser. We have used the techniques that we have developed to construct several pen-based engineering analysis tools. These are used here as examples to illustrate our methods. We have also begun to use our techniques to create pen-based tutoring systems that scaffold students in solving problems in the same way they would ordinarily solve them with paper and pencil. The chapter concludes with a brief discussion of these systems.
The LEBIT 9.4 T Penning trap system
Ringle, R.; Bollen, G.; Schury, P.; Sun, T. [National Superconducting Cyclotron Laboratory, East Lansing, MI (United States); Michigan State University, Department of Physics and Astronomy, East Lansing, MI (United States); Lawton, D.; Schwarz, S. [National Superconducting Cyclotron Laboratory, East Lansing, MI (United States)
The initial experimental program with the Low-Energy Beam and Ion Trap Facility, or LEBIT, will concentrate on Penning trap mass measurements of rare isotopes, delivered by the Coupled Cyclotron Facility (CCF) of the NSCL. The LEBIT Penning trap system has been optimized for high-accuracy mass measurements of very short-lived isotopes. (orig.)
Science and Society: The Case of Acceptance of Newtonian Optics in the Eighteenth Century
Silva, Cibelle Celestino; Moura, Breno Arsioli
The present paper presents a historical study on the acceptance of Newton's corpuscular theory of light in the early eighteenth century. Isaac Newton first published his famous book "Opticks" in 1704. After its publication, it became quite popular and was an almost mandatory presence in cultural life of Enlightenment societies. However, Newton's…
INVESTIGATION OF THE MISCONCEPTION IN NEWTON II LAW
Yudi Kurniawan
This study aims to provide a comprehensive description of the proportion of students who hold misconceptions about Newton's second law. The research was carried out at a State Junior High School in Kab. Pandeglang. Purposive sampling was used in this study because it is important to distinguish students who do not know the concept from students who hold a misconception. Data were collected using a three-tier diagnostic test and analyzed descriptively and quantitatively. The results showed that the levels of misconception fell into the high and medium categories. Subsequent research should develop innovative teaching techniques to address students' misconceptions about Newton's second law.
Preconditioned Inexact Newton for Nonlinear Sparse Electromagnetic Imaging
Desmal, Abdulla
Newton-type algorithms have been extensively studied in nonlinear microwave imaging due to their quadratic convergence rate and ability to recover images with high contrast values. In the past, Newton methods have been implemented in conjunction with smoothness-promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied in imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm is formulated and implemented in conjunction with a linear sparse optimization scheme. A novel preconditioning technique is proposed to increase the convergence rate of the optimization problem. Numerical results demonstrate that the proposed framework produces sharper and more accurate images when applied in sparse/sparsified domains.
Desmal, Abdulla; Bagci, Hakan
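The combination described above, an inexact Newton outer loop with a sparsity-promoting linear step, can be illustrated with a small self-contained sketch. The code below is not the authors' algorithm or preconditioner; the toy forward model, threshold value and problem size are illustrative assumptions, and the linearized step is solved directly (a truncated Krylov solver would make it "inexact" in the strict sense).

```python
import numpy as np

def soft_threshold(x, tau):
    """Elementwise shrinkage operator that promotes sparse solutions."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def newton_sparse(F, J, x0, y, tau=1e-3, outer=20):
    """Drive F(x) toward y, shrinking each linearized (Gauss-Newton-like) update."""
    x = x0.copy()
    for _ in range(outer):
        r = y - F(x)                                    # current data residual
        if np.linalg.norm(r) < 1e-10:
            break
        dx, *_ = np.linalg.lstsq(J(x), r, rcond=None)   # linearized step
        x = soft_threshold(x + dx, tau)                 # update followed by shrinkage
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = np.zeros(20)
x_true[[2, 7, 15]] = [1.0, -0.5, 2.0]         # sparse ground truth
F = lambda x: A @ (x + 0.1 * x**3)            # mildly nonlinear toy forward model
J = lambda x: A * (1.0 + 0.3 * x**2)          # its Jacobian (column scaling of A)
x_rec = newton_sparse(F, J, np.zeros(20), F(x_true))
print("recovered support:", np.flatnonzero(np.abs(x_rec) > 1e-2))
```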
Bell's theorem, the measurement problem, Newton's self-gravitation and its connections to violations of the discrete symmetries C, P, T
Hiesmayr, Beatrix C.
About 50 years ago John St. Bell published his famous Bell theorem that initiated a new field in physics. This contribution discusses how discrete symmetries relate to the big open questions of quantum mechanics, in particular: (i) how correlations stronger than those predicted by theories sharing randomness (Bell's theorem) relate to the violation of the CP symmetry and the P symmetry, and their relation to the security of quantum cryptography; (ii) how the measurement problem ("why do we observe no tables in superposition?") can be probed in weakly decaying systems; (iii) how strongly and weakly interacting quantum systems are affected by Newton's self-gravitation. These preliminary results show that the meson-antimeson systems and the hyperon-antihyperon systems are a unique laboratory to tackle deep fundamental questions and to contribute to understanding what impact the violation of discrete symmetries has.
A numerical method to solve the 1D and the 2D reaction diffusion equation based on Bessel functions and Jacobian free Newton-Krylov subspace methods
Parand, K.; Nikarya, M.
In this paper a novel method will be introduced to solve a nonlinear partial differential equation (PDE). In the proposed method, we use the spectral collocation method based on Bessel functions of the first kind and the Jacobian free Newton-generalized minimum residual (JFNGMRes) method with adaptive preconditioner. In this work a nonlinear PDE has been converted to a nonlinear system of algebraic equations using the collocation method based on Bessel functions without any linearization, discretization or getting the help of any other methods. Finally, by using JFNGMRes, the solution of the nonlinear algebraic system is achieved. To illustrate the reliability and efficiency of the proposed method, we solve some examples of the famous Fisher equation. We compare our results with other methods.
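The Jacobian-free Newton-Krylov (JFNK) idea referred to above, Newton outer iterations whose linear systems are solved by a Krylov method using only residual evaluations, can be tried directly on the Fisher equation with an off-the-shelf solver. The sketch below uses SciPy's newton_krylov on one backward-Euler time step of $u_t = u_{xx} + u(1-u)$; it is only an illustration of JFNK, not the Bessel-collocation scheme of the paper, and the grid, time step and boundary values are illustrative choices.

```python
import numpy as np
from scipy.optimize import newton_krylov

N, dx, dt = 200, 0.05, 0.01
u_old = 1.0 / (1.0 + np.exp(np.linspace(-5, 5, N)))    # smooth initial front

def residual(u):
    """Backward-Euler residual of u_t = u_xx + u(1 - u), Dirichlet ends."""
    u_xx = np.zeros_like(u)
    u_xx[1:-1] = (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
    r = u - u_old - dt * (u_xx + u * (1.0 - u))
    r[0], r[-1] = u[0] - 1.0, u[-1]                     # boundary conditions
    return r

# newton_krylov never forms the Jacobian: Jacobian-vector products are
# approximated by finite differences inside a Krylov (LGMRES) inner solver.
u_new = newton_krylov(residual, u_old, method="lgmres", f_tol=1e-9)
print("time step accepted, max change:", float(np.abs(u_new - u_old).max()))
```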
There is grandeur in this view of Newton: Charles Darwin, Isaac Newton and Victorian conceptions of scientific virtue.
Bellon, Richard
For Victorian men of science, the scientific revolution of the seventeenth century represented a moral awakening. Great theoretical triumphs of inductive science flowed directly from a philosophical spirit that embraced the virtues of self-discipline, courage, patience and humility. Isaac Newton exemplified this union of moral and intellectual excellence. This, at least, was the story crafted by scientific leaders like David Brewster, Thomas Chalmers, John Herschel, Adam Sedgwick and William Whewell. Not everyone accepted this reading of history. Evangelicals who decried the 'materialism' of mainstream science assigned a different meaning to Newton's legacy on behalf of their 'scriptural' alternative. High-church critics of science like John Henry Newman, on the other hand, denied that Newton's secular achievements carried any moral significance at all. These debates over Newtonian standards of philosophical behavior had a decisive influence on Charles Darwin as he developed his theory of evolution by natural selection. Copyright © 2014 Elsevier Ltd. All rights reserved.
Quasi-Newton methods for implicit black-box FSI coupling
CSIR Research Space (South Africa)
Bogaers, Alfred EJ
In this paper we introduce a new multi-vector update quasi-Newton (MVQN) method for implicit coupling of partitioned, transient FSI solvers. The new quasi-Newton method facilitates the use of 'black-box' field solvers and under certain circumstances...
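The appeal of interface quasi-Newton methods for partitioned coupling is that the two field solvers stay black boxes: only interface vectors pass between them. As a rough stand-in for that workflow (not the authors' MVQN update), the sketch below accelerates a toy two-solver fixed-point loop with SciPy's Broyden solver; the linear "solvers" and their coupling strength are invented for illustration.

```python
import numpy as np
from scipy.optimize import broyden1

rng = np.random.default_rng(1)
n = 10
A = 0.2 * rng.standard_normal((n, n))   # toy "fluid" map: interface load -> displacement
B = 0.2 * rng.standard_normal((n, n))   # toy "structure" map: displacement -> load
b = rng.standard_normal(n)

def solver_one(load):                   # black box no. 1
    return A @ load + b

def solver_two(disp):                   # black box no. 2
    return B @ disp

def interface_residual(disp):
    """Residual of the partitioned fixed point d = S1(S2(d))."""
    return solver_one(solver_two(disp)) - disp

d_star = broyden1(interface_residual, np.zeros(n), f_tol=1e-10)
print("coupled residual norm:", np.linalg.norm(interface_residual(d_star)))
```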
Expanding Newton Mechanics with Neutrosophy and Quadstage Method ──New Newton Mechanics Taking Law of Conservation of Energy as Unique Source Law
Fu Yuhua
Neutrosophy is a new branch of philosophy, and "Quad-stage" (four stages) is an expansion of Hegel's triad of thesis, antithesis and synthesis of development. Applying Neutrosophy and the "Quad-stage" method, the purpose of this paper is to expand Newton mechanics into New Newton Mechanics (NNW), taking the law of conservation of energy as its unique source law. The examples in this paper show that in some cases other laws may contradict the law of conservation of energy. The original Newton's three laws and the law of gravity can in principle be derived from the law of conservation of energy. Through the example of a free falling body, this paper derives the original Newton's second law from the law of conservation of energy, and shows that there is no contradiction between the original law of gravity and the law of conservation of energy; and through the example of a small ball rolling along an inclined plane (a problem that cannot be solved by general relativity, in which a body is forced to move in flat space), it derives an improved Newton's second law and an improved law of gravity from the law of conservation of energy. Whether or not other conservation laws (such as the law of conservation of momentum and the law of conservation of angular momentum) can be used should be tested against the law of conservation of energy. When the original Newton's second law is not correct, the laws of conservation of momentum and angular momentum are no longer correct either; therefore the general forms of an improved law of conservation of momentum and an improved law of conservation of angular momentum are presented. In cases where the law of conservation of energy cannot be used effectively, New Newton Mechanics does not exclude deriving the laws or formulas needed to solve specific problems from other theories or accurate experiments. For example, with the help of the result of general relativity, the improved Newton's formula of universal
Dynamic Newton-Puiseux Theorem
Mannaa, Bassel; Coquand, Thierry
A constructive version of Newton-Puiseux theorem for computing the Puiseux expansions of algebraic curves is presented. The proof is based on a classical proof by Abhyankar. Algebraic numbers are evaluated dynamically; hence the base field need not be algebraically closed and a factorization...
[Application of qualitative interviews in inheritance research of famous old traditional Chinese medicine doctors: ideas and experience].
Luo, Jing; Fu, Chang-geng; Xu, Hao
The inheritance of famous old traditional Chinese medicine (TCM) doctors plays an essential role in the fields of TCM research. Qualitative interviews allow for subjectivity and individuality within clinical experience as well as the academic ideas of doctors, making them a potentially appropriate research method for the inheritance of famous old TCM doctors. We summarized the current situation of inheritance research on famous old TCM doctors, and then discussed the feasibility of applying qualitative interviews to the inheritance of famous old TCM doctors. Combining our experience in research on the inheritance of famous old TCM doctors, we give advice on study design, interview implementation, data transcription and analysis, and report writing, providing a reference for further relevant research.
Newton's equation of motion in the gravitational field of an oblate ...
In this paper, we derived Newton's equation of motion for a satellite in the gravitational scalar field of a uniformly rotating, oblate spheroidal Earth using spheroidal coordinates. The resulting equation is solved for the corresponding precession and the result compared with similar ones. JONAMP Vol. 11 2007: pp. 279-286
Correlates of cannabis vape-pen use and knowledge among U.S. college students
Tessa Frohe
Introduction: The proliferation of electronic devices, such as vape-pens, has provided alternative means for cannabis use. Research has found cannabis vaping (i.e., vape-pen use) is associated with lower perceived risks and higher cannabis use. Knowledge of these products may increase the likelihood of subsequent use. As policies for cannabis shift, beliefs that peers and family approve of this substance use (injunctive norms) increase, and there has been an increase in vape-pen use among young adults (18–35 year olds); however, correlates thereof remain unknown. Young adults often engage in cross-substance use with cannabis and alcohol, making alcohol a potential correlate of cannabis vape-pen use and knowledge. Therefore, we examined alcohol use and other potential correlates of vape-pen use and knowledge among a sample of university students. Methods: This secondary data analysis utilized surveys at multiple colleges in the U.S. (N=270). Alcohol use, social anxiety, cannabis expectancies, injunctive and descriptive norms and facets of impulsivity were examined as correlates of vape-pen use and knowledge using bivariate correlations and logistic regressions. Results: Alcohol use was correlated with cannabis vape-pen use and knowledge. Frequency of cannabis use, peer injunctive norms, and positive expectancies were associated with increased likelihood of vape-pen use. Lack of premeditation, a facet of impulsivity, was associated with cannabis vape-pen knowledge. Conclusions: Given the unknown nature and consequences of cannabis vape-pens, the present findings offer valuable information on correlates of this behavior. Further, correlates of knowledge of vape-pens may point to areas for education and clinical intervention to prevent heavy cannabis vape-pen use. Keywords: Marijuana, Vaporizer, College students, Substance use, Attitudes, Cannabis
Recognizing famous voices: influence of stimulus duration and different types of retrieval cues.
Schweinberger, S R; Herholz, A; Sommer, W
The current investigation measured the effects of increasing stimulus duration on listeners' ability to recognize famous voices. In addition, the investigation studied the influence of different types of cues on the naming of voices that could not be named before. Participants were presented with samples of famous and unfamiliar voices and were asked to decide whether or not the samples were spoken by a famous person. The duration of each sample increased in seven steps from 0.25 s up to a maximum of 2 s. Voice recognition improvements with stimulus duration followed a growth function. Gains were most rapid within the first second and less pronounced thereafter. When participants were unable to name a famous voice, they were cued with either a second voice sample, the occupation, or the initials of the celebrity. Initials were most effective in eliciting the name only when semantic information about the speaker had been accessed prior to cue presentation. Paralleling previous research on face naming, this may indicate that voice naming is contingent on previous activation of person-specific semantic information.
Coupling of partitioned physics codes with quasi-Newton methods
Haelterman, R
Various Newton-type iterative methods for solving nonlinear equations
The aim of the present paper is to introduce and investigate new ninth- and seventh-order convergent Newton-type iterative methods for solving nonlinear equations. The ninth-order convergent Newton-type iterative method is made derivative-free to obtain the seventh-order convergent Newton-type iterative method. These new methods, with and without derivatives, have efficiency indices of 1.5518 and 1.6266, respectively. The error equations are used to establish the order of convergence of these proposed iterative methods. Finally, various numerical comparisons are implemented in MATLAB to demonstrate the performance of the developed methods.
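For context, the efficiency index quoted above is the standard measure $\mathrm{EI} = p^{1/\theta}$, where $p$ is the order of convergence and $\theta$ is the number of function and derivative evaluations per iteration. The quoted values are consistent with five evaluations per step for the ninth-order method and four for the derivative-free seventh-order method (the evaluation counts are inferred, not stated in the abstract):

\[
\mathrm{EI} = p^{1/\theta}, \qquad 9^{1/5} \approx 1.5518, \qquad 7^{1/4} \approx 1.6266 .
\]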
Problem in Two Unknowns: Robert Hooke and a Worm in Newton's Apple.
Weinstock, Robert
Discusses the place that Robert Hooke has in science history versus the scientific contributions he made. Examines the relationship between Hooke and his contemporary, Isaac Newton, and Hooke's claims that Newton built on his ideas without giving him recognition. (26 references) (MDH)
Determinants of famous name processing speed: age of acquisition versus semantic connectedness.
Smith-Spark, James H; Moore, Viv; Valentine, Tim
The age of acquisition (AoA) and the amount of biographical information known about celebrities have been independently shown to influence the processing of famous people. In this experiment, we investigated the facilitative contribution of both factors to famous name processing. Twenty-four mature adults participated in a familiarity judgement task, in which the names of famous people were grouped orthogonally by AoA and by the number of bits of biographical information known about them (number of facts known; NoFK). Age of acquisition was found to have a significant effect on both reaction time (RT) and accuracy of response, but NoFK did not. The RT data also revealed a significant AoA×NoFK interaction. The amount of information known about a celebrity played a facilitative role in the processing of late-acquired, but not early-acquired, celebrities. Once AoA is controlled, it would appear that the semantic system ceases to have a significant overall influence on the processing of famous people. The pre-eminence of AoA over semantic connectedness is considered in the light of current theories of AoA and how their influence might interact. Copyright © 2012 Elsevier B.V. All rights reserved.
Non-Relativistic Twistor Theory and Newton-Cartan Geometry
Dunajski, Maciej; Gundry, James
We develop a non-relativistic twistor theory, in which Newton-Cartan structures of Newtonian gravity correspond to complex three-manifolds with a four-parameter family of rational curves with normal bundle $\mathcal{O} \oplus \mathcal{O}(2)$. We show that the Newton-Cartan space-times are unstable under the general Kodaira deformation of the twistor complex structure. The Newton-Cartan connections can nevertheless be reconstructed from Merkulov's generalisation of the Kodaira map augmented by a choice of a holomorphic line bundle over the twistor space trivial on twistor lines. The Coriolis force may be incorporated by holomorphic vector bundles, which in general are non-trivial on twistor lines. The resulting geometries agree with non-relativistic limits of anti-self-dual gravitational instantons.
On the classification of plane graphs representing structurally stable rational Newton flows
Jongen, H.Th.; Jonker, P.; Twilt, F.
We study certain plane graphs, called Newton graphs, representing a special class of dynamical systems which are closely related to Newton's iteration method for finding zeros of (rational) functions defined on the complex plane. These Newton graphs are defined in terms of nonvanishing angles
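As background (standard material, not a result of the paper), the Newton iteration for a zero of a rational function $f$ on the complex plane and the associated continuous Newton flow whose phase portraits these graphs encode are:

\[
z_{k+1} = z_k - \frac{f(z_k)}{f'(z_k)}, \qquad \dot z(t) = -\frac{f(z(t))}{f'(z(t))} .
\]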
New York, Weegee the Famous
Chastagner, Claude
"Weegee the Famous": the nickname Arthur H. Fellig gave himself evokes, in its ironic boastfulness, the carnival barker more than the talented photographer. Nor is there anything in his physical appearance that would invite including him in the closed circle of "real" artists, the great American photographers celebrated by magazines and retrospectives, such as Ansel Adams or Walker Evans, even though he too received the honors of Hollywood, Vogue and MoMA. With his com...
Living Classrooms: Learning Guide for Famous & Historic Trees.
American Forest Foundation, Washington, DC.
This guide provides information to create and care for a Famous and Historic Trees Living Classroom in which students learn American history and culture in the context of environmental change. The booklet contains 10 hands-on activities that emphasize observation, critical thinking, and teamwork. Worksheets and illustrations provide students with…
Eigenvalue Decomposition-Based Modified Newton Algorithm
Wen-jun Wang
When the Hessian matrix is not positive definite, the Newton direction may not be a descent direction. A new method named the eigenvalue decomposition-based modified Newton algorithm is presented, which first takes the eigenvalue decomposition of the Hessian matrix, then replaces the negative eigenvalues with their absolute values, and finally reconstructs the Hessian matrix and modifies the search direction. The new search direction is always a descent direction. The convergence of the algorithm is proven and the conclusion on the convergence rate is presented qualitatively. Finally, a numerical experiment is given comparing the convergence domains of the modified algorithm and the classical algorithm.
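The recipe in the abstract is short enough to state in code: take the symmetric eigendecomposition of the Hessian, replace negative eigenvalues by their absolute values, rebuild the matrix and use it to compute the search direction. The sketch below follows that recipe on a small indefinite test function; the test function, damping factor and iteration count are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def modified_newton_direction(grad, hess):
    """Descent direction from a Hessian whose negative eigenvalues are flipped."""
    w, V = np.linalg.eigh(hess)              # symmetric eigendecomposition
    w_mod = np.maximum(np.abs(w), 1e-8)      # |eigenvalues|, guarded away from zero
    H_mod = V @ np.diag(w_mod) @ V.T         # reconstructed positive-definite Hessian
    return -np.linalg.solve(H_mod, grad)

# Toy objective f(x, y) = x**2 - y**2 + 0.1*y**4 (indefinite Hessian near the origin).
def grad(p):
    x, y = p
    return np.array([2.0 * x, -2.0 * y + 0.4 * y**3])

def hess(p):
    _, y = p
    return np.array([[2.0, 0.0], [0.0, -2.0 + 1.2 * y**2]])

p = np.array([1.0, 0.5])
for _ in range(60):
    p = p + 0.5 * modified_newton_direction(grad(p), hess(p))   # damped step
print("converged near:", np.round(p, 4))   # a local minimum at (0, sqrt(5))
```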
Newton da Costa and the school of Curitiba
Artibano Micali
This paper intends to report on the beginning of the publications of Newton da Costa outside Brazil. Two mathematicians played an important role in this beginning: Marcel Guillaume from the University of Clermont-Ferrand and Paul Dedecker from the Universities of Lille and Liège. At the same time we recall the role played by Newton da Costa and Jayme Machado Cardoso in the development of what we call here the School of Curitiba [Escola de Curitiba]. Paraconsistent logic was initiated in this school under the influence of Newton da Costa. As another contribution of this school we mention the development of the theory of quasigroups; Jayme Machado Cardoso's name has been given, by Sade, to some particular objects which are now called Cardoso quasigroups.
Students' misconceptions about Newton's second law in outer space
Temiz, B K; Yavuz, A
Students' misconceptions about Newton's second law in frictionless outer space were investigated. The research was formed according to an epistemic game theoretical framework. The term 'epistemic' refers to students' participation in problem-solving activities as a means of constructing new knowledge. The term 'game' refers to a coherent activity that consists of moves and rules. A set of questions in which students are asked to solve two similar Newton's second law problems, one of which is on the Earth and the other in outer space, was administered to 116 undergraduate students. The findings indicate that there is a significant difference between students' epistemic game preferences and race-type (outer space or frictional surface) question. So students who used Newton's second law on the ground did not apply this law and used primitive reasoning when it came to space. Among these students, voluntary interviews were conducted with 18 students. Analysis of interview transcripts showed that: (1) the term 'space' causes spontaneity among students that prevents the use of the law; (2) students hesitate to apply Newton's second law in space due to the lack of a condition—the friction; (3) students feel that Newton's second law is not valid in space for a variety of reasons, but mostly for the fact that the body in space is not in contact with a surface. (paper)
Newton's Contributions to Optics. Arvind Kumar. Resonance – Journal of Science Education, Volume 11, Issue 12, December 2006, pp. 10-20. https://www.ias.ac.in/article/fulltext/reso/011/12/0010-0020
Tilting-Twisting-Rolling: a pen-based technique for compass geometric construction
Fei LYU; Feng TIAN; Guozhong DAI; Hongan WANG
This paper presents a new pen-based technique, Tilting-Twisting-Rolling, to support compass geometric construction. By leveraging the 3D orientation information and 3D rotation information of a pen, this technique allows smooth pen action to complete multi-step geometric construction without switching task states. Results from a user study show this Tilting-Twisting-Rolling technique can improve user performance and user experience in compass geometric construction.
Can Newton's Third Law Be "Derived" from the Second?
Gangopadhyaya, Asim; Harrington, James
Newton's laws have engendered much discussion over several centuries. Today, the internet is awash with a plethora of information on this topic. We find many references to Newton's laws, often discussions of various types of misunderstandings and ways to explain them. Here we present an intriguing example that shows an assumption hidden in…
Particle confinement in penning traps an introduction
Vogel, Manuel
This book provides an introduction to the field of Penning traps and related experimental techniques. It serves both as a primer for those entering the field, and as a quick reference for those working in it. The book is motivated by the observation that often a vast number of different resources have to be explored to gain a good overview of Penning trap principles. This is especially true for students who experience additional difficulty due to the different styles of presentation and notation. This volume provides a broad introductory overview in unified notation.
Penning transfer in argon-based gas mixtures
Sahin, O; Tapan, I; Ozmutlu, E N
Penning transfers, a group of processes by which excitation energy is used to ionise the gas, increase the gas gain in some detectors. Both the probability that such transfers occur and the mechanism by which the transfer takes place, vary with the gas composition and pressure. With a view to developing a microscopic electron transport model that takes Penning transfers into account, we use this dependence to identify the transfer mechanisms at play. We do this for a number of argon-based gas mixtures, using gain curves from the literature.
A variational principle for Newton-Cartan theory
Goenner, H.F.M.
In the framework of a space-time theory of gravitation a variational principle is set up for the gravitational field equations and the equations of motion of matter. The general framework leads to Newton's equations of motion with an unspecified force term and, for irrotational motion, to a restriction on the propagation of the shear tensor along the streamlines of matter. The field equations obtained from the variation are weaker than the standard field equations of Newton-Cartan theory. An application to fluids with shear and bulk viscosity is given. (author)
Newton 1642-1727
The most famous of scientists, Isaac Newton, is also the one with the most biographers. Even before his death in 1727, one of them had published an account of the great man's life. Richard Westfall, an American academic, is today the foremost expert on a character extraordinary in every respect, of whom Aldous Huxley said: "As a man he was a failure; as a monster he was superb!" Discovering the law of universal gravitation at 24, establishing the laws of optics shortly afterwards while pursuing alchemical and theological studies, this man, capable of going whole days without eating or sleeping, absorbed by the riddles of knowledge, suffered a serious depression from which he barely recovered... only to devote himself to his country's economy: he became Master of the Mint in London, organizing a merciless hunt for counterfeiters! The popular image of Newton watching an apple fall emerges enriched and complicated by this book, the fruit of a lifetime of resear...
The Celestial Mechanics of Newton
... Johannes Kepler had announced his first two laws of planetary motion (AD 1609) ... "Mathematical Principles of Natural Philosophy" ... He provided two different sets of proofs ... the Sun. Newton then formulated a theory of tides based on the ...
Lewis, Prof. Gilbert Newton
Fellow Profile. Elected: 1935 (Honorary). Date of birth: 25 October 1875. Date of death: 24 March 1946.
Astronomical and Cosmological Symbolism in Art Dedicated to Newton and Einstein
Sinclair, R.
Separated by two and a half centuries, Isaac Newton (1642-1727) and Albert Einstein (1879-1955) had profound impacts on our understanding of the universe. Newton established our understanding of universal gravitation, which was recast almost beyond recognition by Einstein. Both discovered basic patterns behind astronomical phenomena and became the best-known scientists of their respective periods. I will describe here how artists of the 18th and 20th centuries represented the achievements of Newton and Einstein. Representations of Newton express reverence, almost an apotheosis, portraying him as the creator of the universe. Einstein, in a different age, is represented often as a comic figure, and only rarely do we find art that hints at the profound view of the universe he developed.
Life after Newton: an ecological metaphysic.
Ulanowicz, R E
Ecology may indeed be 'deep', as some have maintained, but perhaps much of the mystery surrounding it owes more simply to the dissonance between ecological notions and the fundamentals of the modern synthesis. Comparison of the axioms supporting the Newtonian world view with those underlying the organicist and stochastic metaphors that motivate much of ecosystems science reveals strong disagreements--especially regarding the nature of the causes of events and the scalar domains over which these causes can operate. The late Karl Popper held that the causal closure forced by our mechanical perspective on nature frustrates our attempts to achieve an 'evolutionary theory of knowledge.' He suggested that the Newtonian concept of 'force' must be generalized to encompass the contingencies that arise in evolutionary processes. His reformulation of force as 'propensity' leads quite naturally to a generalization of Newton's laws for ecology. The revised tenets appear, however, to exhibit more scope and allow for change to arise from within a system. Although Newton's laws survive (albeit in altered form) within a coalescing ecological metaphysic, the axioms that Enlightenment thinkers appended to Newton's work seem ill-suited for ecology and perhaps should yield to a new and coherent set of assumptions on how to view the processes of nature.
Asymmetric Penning trap coherent states
Contreras-Astorga, Alonso; Fernandez, David J.
By using a matrix technique, which allows one to identify the ladder operators directly, the coherent states of the asymmetric Penning trap are derived as eigenstates of the appropriate annihilation operators. They are compared with those obtained through the displacement operator method.
On Time-II: Newton's Time.
Raju, C. K.
A study of time in Newtonian physics is presented. Newton's laws of motion, falsifiability and physical theories, laws of motion and law of gravitation, and Laplace's demon are discussed. Short bibliographic sketches of Laplace and Karl Popper are included. (KR)
RESEARCHES REGARDING THE MAIN REPRODUCTION INDICATORS DETERMINED IN SOWS, STAND GESTATION PEN TYPE
RAMONA UNTARU
The current research was carried out with the goal of quantifying the losses from weaning to early gestation in sows housed in open-pen gestation. In this trial we tested two pen types, differing not only in size but also in feeder placement. The main reproduction indicators that we calculated until day 28 of gestation were the proportion of sows in heat after weaning, the weaning-to-estrus interval and the gestation rates. The weaning-to-estrus interval was about 4 to 7 days; most sows were in heat on days 5 and 6 after weaning. The percentage of heat detection after weaning was 71.42% for the small pens and 70.71% for the big pens (differences statistically non-significant, chi-square test value 0.983). The gestation rate at 28 days after insemination was 91.62% for the small pens and 94.72% for the large pens (chi-square test value 0.959, statistically non-significant differences). Overcrowding for heat induction and the subsequent grouping of animals in those pens showed that losses can reach up to 40.47% between weaning and day 28 of gestation.
Dynamic-compliance and viscosity of PET and PEN
Weick, Brian L.
Complex dynamic-compliance and in-phase dynamic-viscosity data are presented and analyzed for PET and PEN advanced polyester substrates used for magnetic tapes. Frequency-temperature superposition is used to predict long-term behavior. Temperature and frequency ranges for the primary glass transition and secondary transitions are discussed and compared for PET and PEN. Shift factors from frequency-temperature superposition are used to determine activation energies for the transitions, and WLF parameters are determined for the polyester substrates.
Weick, Brian L. [School of Engineering and Computer Science, University of the Pacific, Stockton, California, 95211 (United States)
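For reference, the WLF parameters mentioned above enter the standard Williams-Landel-Ferry expression for the shift factor $a_T$ used in frequency-temperature superposition; only the general form is reproduced here, not the $C_1$, $C_2$ values fitted for the PET and PEN substrates:

\[
\log a_T \;=\; \frac{-\,C_1\,(T - T_{\mathrm{ref}})}{C_2 + (T - T_{\mathrm{ref}})} .
\]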
Space-charge effects in Penning ion traps
Porobić, T.; Beck, M.; Breitenfeldt, M.; Couratin, C.; Finlay, P.; Knecht, A.; Fabian, X.; Friedag, P.; Fléchard, X.; Liénard, E.; Ban, G.; Zákoucký, D.; Soti, G.; Van Gorp, S.; Weinheimer, Ch.; Wursten, E.; Severijns, N.
The influence of space-charge on ion cyclotron resonances and magnetron eigenfrequency in a gas-filled Penning ion trap has been investigated. Off-line measurements with ³⁹K⁺ using the cooling trap of the WITCH retardation spectrometer-based setup at ISOLDE/CERN were performed. Experimental ion cyclotron resonances were compared with ab initio Coulomb simulations and found to be in agreement. As an important systematic effect of the WITCH experiment, the magnetron eigenfrequency of the ion cloud was studied under increasing space-charge conditions. Finally, the helium buffer gas pressure in the Penning trap was determined by comparing experimental cooling rates with simulations.
Entropic corrections to Newton's law
Setare, M R; Momeni, D; Myrzakulov, R
In this short paper, we calculate separately the generalized uncertainty principle (GUP) and self-gravitational corrections to Newton's gravitational formula. We show that for a complete description of the GUP and self-gravity effects, both the temperature and entropy must be modified. (paper)
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
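Read schematically, the scheme above solves a sequence of shrinkage-regularized linearized problems in which the weight of the sparsity penalty is relaxed as the Newton iterates converge. A generic form of such a subproblem (an illustration of the idea, not the authors' exact functional) is:

\[
x_{k+1} \;=\; \arg\min_{x}\; \bigl\lVert\, y - F(x_k) - J_k (x - x_k) \,\bigr\rVert_2^2 \;+\; \lambda_k \lVert x \rVert_1, \qquad \lambda_{k+1} < \lambda_k ,
\]

where $F$ is the forward scattering operator, $J_k$ its Jacobian at the current iterate, and $\lambda_k$ the decreasing penalty weight.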
Newton's Principia: Myth and Reality
Smith, George
Myths about Newton's Principia abound. Some of them, such as the myth that the whole book was initially developed using the calculus and then transformed into a geometric mathematics, stem from remarks he made during the priority controversy with Leibniz over the calculus. Some of the most persistent, and misleading, arose from failures to read the book with care. Among the latter are the myth that he devised his theory of gravity in order to explain the already established ``laws'' of Kepler, and that in doing so he took himself to be establishing that Keplerian motion is ``absolute,'' if not with respect to ``absolute space,'' then at least with respect to the fixed stars taken as what came later to be known as an inertial frame. The talk will replace these two myths with the reality of what Newton took himself to have established.
Newton Binomial Formulas in Schubert Calculus
Cordovez, Jorge; Gatto, Letterio; Santiago, Taise
We prove Newton's binomial formulas for Schubert Calculus to determine numbers of base point free linear series on the projective line with prescribed ramification divisor supported at given distinct points.
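The classical identity that these Schubert-calculus formulas generalize is Newton's binomial theorem, recalled here only as background:

\[
(x + y)^n \;=\; \sum_{k=0}^{n} \binom{n}{k}\, x^{k}\, y^{\,n-k} .
\]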
[The establishment of a research inheritance mode for famous academic thoughts].
Zhou, Xue-ping; Wu, Mian-hua; Guo, Wei-feng
To study and summarize the academic thoughts of famous Chinese medicine doctors is the main pathway for developing Chinese medicine theories. It is of important significance for enriching and developing Chinese medicine theories to combine traditional and modern research methods and to merge multiple disciplines in studying the research inheritance mode of famous academic thoughts. The major study links include: (1) refining scientific hypotheses from a huge amount of clinical case records; (2) finding the literature sources; (3) embodying the practical significance of the innovative theories through clinical studies; (4) revealing the scientific connotation of Chinese medicine theories through experimental studies. We hope to reach the goal of innovating and developing Chinese medicine theories on the basis of inheritance by integrating clinical case records, tracing the literature sources, and clinical and experimental studies.
Three lectures on Newton's laws
Kokarev, Sergey S.
Three small lectures are devoted to three Newton's laws, lying in the foundation of classical mechanics. These laws are analyzed from the viewpoint of our contemporary knowledge about space, time and physical interactions. The lectures were delivered for students of YarGU in RSEC "Logos".
Influence of lucite phantoms on calibration of dosimetric pens
Oliveira, E.C.; Xavier, M.; Caldas, L.E.V.
Dosimetric pens were studied for response reproducibility and were tested in gamma radiation fields (⁶⁰Co and ¹³⁷Cs) in air and in front of a lucite phantom, obtaining a backscattering contribution. The mean backscattering factors were 1.053 and 1.108 for ⁶⁰Co and ¹³⁷Cs, respectively. The pens were placed behind the phantom to verify the radiation attenuation. (C.G.C.)
A primeira Lei de Newton: uma abordagem didática
da Silva, Saulo Luis Lima
Abstract: In the study of Newtonian mechanics the essential point is to understand Newton's laws in depth. Once this happens, it becomes easy to see that all the other phenomena to be studied are consequences of these three basic laws of motion formulated by Isaac Newton. Among them, Newton's first law, known as the law of inertia, is the one of greatest philosophical complexity and the least understood by students when they leave a basic physics course. It is not uncommon to find students…
Phase II Practice-based Evidence in Nutrition (PEN) evaluation: interviews with key informants.
Bowden, Fran Martin; Lordly, Daphne; Thirsk, Jayne; Corby, Lynda
Dietitians of Canada has collaborated with experts in knowledge translation and transfer, technology, and dietetic practice to develop and implement an innovative online decision-support system called Practice-based Evidence in Nutrition (PEN). A study was conducted to evaluate the perceived facilitators and barriers that enable dietitians to use or prevent them from using PEN. As part of the overall evaluation framework of PEN, a qualitative descriptive research design was used to address the research purpose. Individual, semi-structured telephone interviews with 17 key informants were completed, and the interview transcripts underwent qualitative content analysis. Respondents identified several facilitators of and barriers to PEN use. Facilitators included specificity to dietetics, rigorous/expert review, easy accessibility, current content, credible/secure material, well-organized/easy-to-use material, material that is valuable to practice, and good value for money. Barriers included perceived high cost, fee structuring/cost to students, certain organizational aspects, and a perceived lack of training for pathway contributors. This formative evaluation has indicated areas in which PEN could be improved and strategies to make PEN the standard for dietetic education and practice. Ensuring that PEN is meeting users' knowledge needs is of the utmost importance if dietitians are to remain on the cutting edge of scientific inquiry.
Fast and Simple Forensic Red Pen Ink Analysis Using Ultra-Performance Liquid Chromatography (UPLC)
Lee, L.C.; Ying, S.L.; Wan Nur Syazwani Wan Mohamad Fuad; Ab Aziz Ishak; Khairul Osman
Ultra-performance liquid chromatography (UPLC) is more effective than high-performance liquid chromatography in terms of analysis speed and sensitivity. This paper presents a feasibility study on forensic red pen ink analysis using UPLC. A total of 12 varieties of red ballpoint pen inks were purchased from selected stationery shops. For each variety, four different individual pens were sampled to capture intra-variety variability. The proposed approach is very simple in that it involves only a limited number of analysis steps and chemicals. A total of 144 chromatograms were obtained from red ink entries extracted with 1.5 mL of 80% (v/v) acetonitrile. Peaks originating from pen inks were determined by comparing the chromatograms of both blank paper and blank solvent against those of the ink samples. Subsequently, one-way ANOVA was conducted to discriminate all 66 possible pairs of red pen inks. Results showed that the proposed approach gave a discriminating power of 95.45%. The outcome of the study indicates that UPLC could be a fast and simple approach to red ballpoint pen ink analysis. (author)
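As a sanity check on the figures above (the count of 63 discriminated pairs is inferred here from the quoted percentage, not stated in the record):

    \binom{12}{2} = 66 \ \text{pairs}, \qquad \mathrm{DP} = \frac{63}{66} \approx 95.45\,\%.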
Newton law on the generalized singular brane with and without 4d induced gravity
Jung, Eylee; Kim, Sung-Hoon; Park, D.K.
The Newton law arising due to gravity localized on a general singular brane embedded in an AdS₅ bulk is examined in the absence or presence of the 4d induced Einstein term. For the RS brane, apart from a subleading correction, the Newton potential obeys 4d- and 5d-type gravitational laws at long and short ranges, respectively, if it were not for the induced Einstein term. The 4d induced Einstein term generates an intermediate range at short distance, in which the 5d Newton potential 1/r² emerges. For the Neumann brane the long-range behavior of the Newton potential is exponentially suppressed regardless of the existence of the induced Einstein term. For the Dirichlet brane the expression of the Newton potential depends on the renormalized coupling constant v_ren. At a particular value of v_ren the Newton potential on the Dirichlet brane exhibits a behavior similar to that on the RS brane. For other values the long-range behavior of the Newton potential is exponentially suppressed, as for the Neumann brane.
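For orientation, the Randall-Sundrum (RS) behaviour referred to above is usually quoted in the form below, where ℓ is the AdS₅ curvature radius; this is the generic result from the brane-world literature, not a formula extracted from this record:

    V(r) \approx -\frac{G\,m_1 m_2}{r}\Bigl(1 + \tfrac{2\ell^2}{3r^2}\Bigr) \ \ (r \gg \ell), \qquad V(r) \sim -\frac{G_5\,m_1 m_2}{r^2} \ \ (r \ll \ell).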
Famous problems of geometry and how to solve them
Bold, Benjamin
Amateur puzzlists as well as students of mathematics and geometry will relish this rare opportunity to match wits with Archimedes, Euclid, Newton, Descartes, and other great mathematicians. Each chapter explores an individual type of geometric challenge, with commentary and practice problems, and reveals a milestone in the development of mathematics. Solutions.
Innovation & evaluation of tangible direct manipulation digital drawing pens for children.
Lee, Tai-Hua; Wu, Fong-Gong; Chen, Huei-Tsz
Focusing on the theme of direct manipulation, in this study we proposed a new and innovative tangible user interface (TUI) design concept for a manipulative digital drawing pen. Based on interviews with focus groups, brainstorming with experts, and the results of a field survey, we selected the most suitable tangible user interface for children between 4 and 7 years of age. Using the new tangible user interface, children could choose between the brush tools after touching and feeling the various patterns. The thickness of the brush could be adjusted by changing the tilt angle. In a subsequent experimental process we compared the differences in performance and subjective user satisfaction. A total of sixteen children, aged 4-7 years, participated in the experiment. Two operating-system experiments (the newly designed tangible digital drawing pen and a traditional visual-interface, icon-clicking digital drawing pen) were performed at random and in turns. We assessed manipulation performance, accuracy, brush stroke richness and subjective evaluations. During the experimental process we found that operating functions using the direct manipulation method, and adding shapes and semantic models to explain the purpose of each function, enabled the children to perform stroke switches relatively smoothly. By using direct manipulation digital pens, the children could improve their stroke-switching performance for digital drawing. Additionally, by using various patterns to represent different brushes or tools, the children were able to make selections using their sense of touch, thereby reducing the time required to move along the drawing pens and select icons (significant differences, p = 0.000). For drawing thick lines with the crayon function of the two (new and old) drawing pens, the direct-manipulation drawing operations enhanced the drawing results, thereby increasing the children's enjoyment of drawing with tangible digital drawing pens.
Chemical composition of felt-tip pen inks.
Germinario, Giulia; Garrappa, Silvia; D'Ambrosio, Valeria; van der Werf, Inez Dorothé; Sabbatini, Luigia
Felt-tip pens are frequently used for the realization of sketches, drawings, architectural projects, and other technical designs. The formulations of these inks are usually rather complex and may be associated with those of modern paint materials where, next to the binding medium and pigments/dyes, solvents, fillers, emulsifiers, antioxidants, plasticizers, light stabilizers, biocides, and so on are commonly added. Felt-tip pen inks are extremely sensitive to degradation, and especially exposure to light may cause chromatic changes and fading. In this study, we report on the complete chemical characterization of modern felt-tip pen inks that are commercially available and commonly used for the realization of artworks. Three brands of felt-tip pens (Faber-Castell, Edding, and Stabilo) were investigated with complementary analytical techniques such as thin-layer chromatography (TLC), VIS-reflectance spectroscopy, μ-Raman spectroscopy, surface-enhanced Raman spectroscopy (SERS), pyrolysis-gas chromatography-mass spectrometry (Py-GC-MS), GC-MS, and Fourier transform infrared (FTIR) spectroscopy. The use of TLC proved to be very powerful in the study of complex mixtures of synthetic dyes. First derivatives of the reflectance spectra acquired on the TLC spots were useful in the preliminary identification of the dye, followed by Raman spectroscopy and SERS, which allowed for the unambiguous determination of the chemical composition of the pigments (phthalocyanines, dioxazines, and azo pigments) and dyes (azo dyes, triarylmethanes, xanthenes). FTIR spectroscopy was used especially for the detection of additives, as well as for confirming the nature of solvents and dyes/pigments. Finally, (Py-)GC-MS data provided information on the binders (styrene-acrylic resins, plant gums), solvents, and additives, as well as on pigments and dyes.
XMM-Newton operations beyond the design lifetime
Parmar, Arvind N.; Kirsch, Marcus G. F.; Muñoz, J. Ramon; Santos-Lleo, Maria; Schartel, Norbert
After more than twelve years in orbit and two years beyond the design lifetime, XMM-Newton continues its near faultless operations, providing the worldwide astronomical community with an unprecedented combination of imaging and spectroscopic X-ray capabilities together with simultaneous optical and ultra-violet monitoring. The interest from the scientific community in observing with XMM-Newton remains extremely high, with the last annual Announcement of Observing Opportunity (AO-11) attracting proposals requesting 6.7 times more observing time than was available. Following recovery from a communications problem in 2008, all elements of the mission are stable and largely trouble free. The operational lifetime is currently limited by the amount of available hydrazine fuel. XMM-Newton normally uses reaction wheels for attitude control, and fuel is only used when offsetting reaction wheel speed away from limiting values and for emergency Sun acquisition following an anomaly. Currently, the hydrazine is predicted to last until around 2020. However, ESA is investigating the possibility of making changes to the operations concept and the onboard software that would enable lower fuel consumption. This could allow operations well beyond 2026.
Newton's Telescope in Print: the Role of Images in the Reception of Newton's Instrument
Dupré, Sven
While Newton tried to make his telescope into a proof of the supremacy of his theory of colours over older theories, his instrument was welcomed as a way to shorten telescopes, not as a way to solve the problem of chromatic aberration. This paper argues that the image published together with the…
A redesigned follitropin alfa pen injector for infertility: results of a market research study
Abbotts C
Carole Abbotts (1), Cristiana Salgado-Braga (2), Céline Audibert-Gros (3); 1 Pharmaceutical Marketing Research Consultancy, London, UK; 2 Fertility and Endocrinology Global Business Unit and 3 Business Intelligence, Merck Serono SA, Geneva, Switzerland. Background: The purpose of this study was to evaluate patient-learning and nurse-teaching experiences when using a redesigned prefilled, ready-to-use follitropin alfa pen injector. Methods: Seventy-three UK women of reproductive age, either administering daily treatment with self-injectable gonadotropins or about to start gonadotropin treatment for infertility (aged 24-47 years; 53 self-injection-experienced and 20 self-injection-naïve), and 28 nurses from UK infertility clinics were recruited for the study. Following instruction, patients and nurses used the redesigned follitropin alfa pen to inject water into an orange and completed questionnaires to evaluate their experiences with the pen immediately after the simulated injections. Results: Most patients (88%, n = 64) found it easy to learn how to use the pen. Among injection-experienced patients, 66% (n = 35) agreed that the redesigned pen was easier to learn to use compared with their current method, and 70% (n = 37) also said they would prefer its use over current devices for all injectable fertility medications. All nurses considered the redesigned pen easy to learn and believed it would be easy to teach patients how to use. Eighty-six percent (n = 24) of the nurses thought it was easy to teach patients to determine the remaining dose to be dialed and injected in a second pen if the initial dose was incomplete. Compared with other injection devices, 96% (n = 27) thought it was "much easier" to "as easy" to teach patients to use the redesigned pen. Based on ease of teaching, 68% (n = 19) of nurses would choose to teach the pen in preference to any other injection method. Almost all nurses (93%, n = 26) considered that having the same pen format for a range of…
Update on insulin treatment for dogs and cats: insulin dosing pens and more
Thompson A
Ann Thompson (1), Patty Lathan (2), Linda Fleeman (3); 1 School of Veterinary Science, The University of Queensland, Gatton, QLD, Australia; 2 College of Veterinary Medicine, Mississippi State University, Starkville, MS, USA; 3 Animal Diabetes Australia, Melbourne, VIC, Australia. Abstract: Insulin therapy is still the primary therapy for all diabetic dogs and cats. Several insulin options are available for each species, including veterinary registered products and human insulin preparations. The insulin chosen depends on the individual patient's requirements. Intermediate-acting insulin is usually the first choice for dogs, and longer-acting insulin is the first choice for cats. Once the insulin type is chosen, the best method of insulin administration should be considered. Traditionally, insulin vials and syringes have been used, but insulin pen devices have recently entered the veterinary market. Pens have different handling requirements when compared with standard insulin vials, including: storage out of the refrigerator for some insulin preparations once pen cartridges are in use; priming of the pen to ensure a full dose of insulin is administered; and holding the pen device in place for several seconds during the injection. Many different types of pen devices are available, with features such as half-unit dosing, large dials for visually impaired people, and memory that can display the last time and dose of insulin administered. Insulin pens come in both reusable and disposable options. Pens have several benefits over syringes, including improved dose accuracy, especially for low insulin doses. Keywords: diabetes, mellitus, canine, feline, NPH, glargine, porcine lente
The frictional Schroedinger-Newton equation in models of wave function collapse
Diosi, Lajos [Research Institute for Particle and Nuclear Physics, H-1525 Budapest 114, PO Box 49 (Hungary)
Replacing the Newtonian coupling G by -iG, the Schroedinger-Newton equation becomes "frictional". Instead of the reversible Schroedinger-Newton equation, we advocate its frictional version to generate the set of pointer states for macroscopic quantum bodies.
Hiesmayr, Beatrix C
About 50 years ago John S. Bell published his famous Bell theorem, which initiated a new field in physics. This contribution discusses how discrete symmetries relate to the big open questions of quantum mechanics, in particular: (i) how correlations stronger than those predicted by theories sharing randomness (Bell's theorem) relate to the violation of the CP symmetry and the P symmetry, and how this relates to the security of quantum cryptography; (ii) how the measurement problem ("why do we observe no tables in superposition?") can be polled in weakly decaying systems; (iii) how strongly and weakly interacting quantum systems are affected by Newton's self-gravitation. The preliminary results presented show that the meson-antimeson systems and the hyperon-antihyperon systems are a unique laboratory for tackling deep fundamental questions and for understanding what impact the violation of discrete symmetries has. (paper)
Dynamic monitoring of weight data at the pen vs at the individual level
Jensen, Dan; Toft, Nils; Kristensen, A. R. K.
… recorded weight data from finisher pigs. Data are collected at insertion and at the exit of the first pigs in the pen, and in a few pens the weight is recorded weekly. Dynamic linear models are fitted to the weight data at the pig level (univariate), at the double-pen level using averaged weights (univariate), and using individual pig values as parameters in a hierarchical (multivariate) model including section, double-pen, and individual levels. Variance components of the different models are estimated using the Expectation Maximization algorithm. The difference in information obtained at the individual vs pen level is thereafter assessed. Whereas weight data are usually monitored only after a batch has been sent to the slaughterhouse, this method provides weekly updating of the data. Perspectives of application include dynamic monitoring of weight data in relation to events such as diarrhoea or tail…
A Survey of Pen Name Semantic Applications in Rumi's Sonnets (Ghazals)
Zohre AhmadiPoor anari
Abstract: The pen name in a sonnet (ghazal) is the poet's poetic name, which most poets mention in their verses. Jalal ad-Din Muhammad Balkhi, also known as Jalal ad-Din Rumi, lived in the 13th century and was a Persian Muslim poet, theologian, and Sufi mystic. He wrote more than 3229 sonnets dedicated to Shams Tabrizi, and thus mentioned names such as "Shams", "Shams od-Din" and "Shams al-Haq" in the ending lines of his sonnets. One of the points which can be studied about the pen name is the theme or concepts mentioned alongside it. It has been said that the same theme accompanies the pen name "Shams" in 992 sonnets. In this study, we consider what theme accompanies the poet's chosen name (which is not necessarily a pen name) in Rumi's sonnets, and what its relationship is with the previous lines. The themes which poets apply in their sonnets beside the pen name are mostly those already mentioned in the previous lines. At times, however, the concept mentioned alongside the pen name is independent of the sonnet's concepts, mostly eulogy. A study of Hafiz's and Saadi's sonnets shows that the most important themes are: declaration of love, advice, eulogy, and sometimes a mischievous concept. Rumi's sonnets are lover-based; therefore there is much talk of the lover throughout the sonnet, whereas in other poets' sonnets the lover (the poet) is the main theme of the sonnet. The poet may find a way to praise his own poem, or stay in his dreamy world and focus on romantic feelings. Considering the fact that, unlike other poets, Rumi mentioned not his own pen name but his beloved "Shams", this study focuses on the themes mentioned with the pen name "Shams", as follows: 1. Eulogy: one third of the Shams pen names are eulogies. The Sufi approach has given the lines a special colour. The similes and metaphors used for him are heavenly and…
Disk-galaxy density distribution from orbital speeds using Newton's law
Nicholson, Kenneth F.
Given the dimensions (including thickness) of an axisymmetric galaxy, Newton's law is used in integral form to find the density distributions required to match a wide range of orbital speed profiles. Newton's law is not modified and no dark-matter halos are required. The speed distributions can have extreme shapes if they are reasonably smooth. Several examples are given.
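For reference, the relation being inverted is essentially the balance between the centripetal acceleration of a circular orbit and the Newtonian field of the mass distribution (written here in generic form; the record's integral formulation for a finite-thickness disk may differ in detail):

    \frac{v_c^2(r)}{r} = \Bigl|\frac{\partial\Phi}{\partial r}\Bigr|, \qquad \Phi(\mathbf{r}) = -G\int \frac{\rho(\mathbf{r}')}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'.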
N=2 superconformal Newton-Hooke algebra and many-body mechanics
A representation of the conformal Newton-Hooke algebra on a phase space of n particles in arbitrary dimension which interact with one another via a generic conformal potential and experience a universal cosmological repulsion or attraction is constructed. The minimal N=2 superconformal extension of the Newton-Hooke algebra and its dynamical realization in many-body mechanics are studied.
On the Shoulders of Sir Isaac Newton and Arthur Storer
Martin, Helen E.; Evans-Gondo, Bonita
Helen E. Martin, the author of this article, is a retired National Board Certified Teacher who has been researching Sir Isaac Newton's unpublished manuscripts for over three decades. While researching the work of Newton, a teacher she was mentoring asked for some hands-on activities to study planetary motion. The description of the activity…
Laboratory Test of Newton's Second Law for Small Accelerations
Gundlach, J. H.; Schlamminger, S.; Spitzer, C. D.; Choi, K.-Y.; Woodahl, B. A.; Coy, J. J.; Fischbach, E.
We have tested the proportionality of force and acceleration in Newton's second law, F = ma, in the limit of small forces and accelerations. Our tests reach well below the acceleration scales relevant to understanding several current astrophysical puzzles such as the flatness of galactic rotation curves, the Pioneer anomaly, and the Hubble acceleration. We find good agreement with Newton's second law at accelerations as small as 5 × 10⁻¹⁴ m/s².
The Possible Role of Penning Ionization Processes in Planetary Atmospheres
Stefano Falcinelli
In this paper we suggest Penning ionization as an important route of formation for ionic species in upper planetary atmospheres. Our goal is to provide relevant tools to researchers working on kinetic models of atmospheric interest, in order to include Penning ionizations in their calculations as fast processes promoting reactions that cannot be neglected. Ions are extremely important for the transmission of radio and satellite signals, and they govern the chemistry of planetary ionospheres. Molecular ions have also been detected in comet tails. In this paper recent experimental results concerning the production of simple ionic species of atmospheric interest are presented and discussed. These results concern the formation of free ions in the collisional ionization of H₂O, H₂S, and NH₃ induced by highly excited species (Penning ionization by metastable noble gas atoms). The effect of Penning ionization has not been considered in the modeling of terrestrial and extraterrestrial objects so far, even though metastable helium is formed by radiative recombination of He⁺ ions with electrons. Because helium is the second most abundant element of the universe, Penning ionization of atomic or molecular species by He*(2³S₁) is plausibly an active route of ionization in relatively dense environments exposed to cosmic rays.
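Schematically, a Penning ionization step of the kind discussed above reads as follows (the excitation energy of He(2³S₁) and the ionization potential of H₂O are quoted from standard tables, not from this record):

    \mathrm{He}^*(2^3S_1) + \mathrm{H_2O} \;\longrightarrow\; \mathrm{He} + \mathrm{H_2O}^{+} + e^{-}, \qquad E^*(\mathrm{He}) \approx 19.8\ \mathrm{eV} > \mathrm{IP}(\mathrm{H_2O}) \approx 12.6\ \mathrm{eV}.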
Does Newton's world model revive?
Meszaros, A.
Newton's world model may have a physical meaning if gravitation has a small non-zero mass and if the observable part of the universe is the interior of a giant finite body. Both possibilities are allowed theoretically. (author)
An experimental test of Newton's law of gravitation for small accelerations
Schubert, Sven
The experiment presented in this thesis has been designed to test Newton's law of gravitation in the limit of small accelerations caused by weak gravitational forces. It is located at DESY, Hamburg, and is a modification of an experiment that was carried out in Wuppertal, Germany, until 2002 in order to measure the gravitational constant G. The idea of testing Newton's law in the case of small accelerations emerged from the question whether the flat rotation curves of spiral galaxies can be traced back to Dark Matter or to a law of gravitation that deviates from Newton on cosmic scales, like e.g. MOND (Modified Newtonian Dynamics). The core of this experiment is a microwave resonator which is formed by two spherical concave mirrors that are suspended as pendulums. Masses between 1 and 9 kg symmetrically change their distance to the mirrors from far to near positions. Due to the increased gravitational force the mirrors are pulled apart and the length of the resonator increases. This causes a shift of the resonance frequency which can be translated into a shift of the mirror distance. The small masses are sources of weak gravitational forces and cause accelerations on the mirrors of about 10⁻¹⁰ m/s². These forces are comparable to those between stars on cosmic scales, and the accelerations are in the vicinity of the characteristic acceleration of MOND, a₀ ≈ 1.2 × 10⁻¹⁰ m/s², where deviations from Newton's law are expected. Thus Newton's law could be directly checked for correctness under these conditions. First measurements show that, due to the sensitivity of this experiment, many systematic influences have to be accounted for in order to get consistent results. Newton's law has been confirmed with an accuracy of 3%. MOND has also been checked. In order to be able to distinguish Newton from MOND with other interpolation functions, the accuracy of the experiment has to be improved. (orig.)
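For context, the MOND phenomenology mentioned in the abstract is usually written with an interpolation function μ relating the true acceleration a to the Newtonian one a_N (textbook form, not taken from the thesis):

    \mu\!\left(\frac{a}{a_0}\right) a = a_N, \qquad a \to a_N \ (a \gg a_0), \qquad a \to \sqrt{a_N a_0} \ (a \ll a_0), \qquad a_0 \approx 1.2\times 10^{-10}\ \mathrm{m/s^2}.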
Analysis of the NovoTwist pen needle in comparison with conventional screw-thread needles.
Aye, Tandy
Administration of insulin via a pen device may be advantageous over a vial and syringe system. Hofman and colleagues introduce a new insulin pen needle, the NovoTwist, to simplify injections to a small group of children and adolescents. Their overall preferences and evaluation of the handling of the needle are reported in the study. This new needle has the potential to ease administration of insulin via a pen device that may increase both the use of a pen device and adherence to insulin therapy. © 2011 Diabetes Technology Society.
What are the Hidden Quantum Processes Behind Newton's Laws?
Ostoma, Tom; Trushyk, Mike
We investigate the hidden quantum processes that are responsible for Newton's laws of motion and Newton's universal law of gravity. We apply Electro-Magnetic Quantum Gravity, or EMQG, to investigate Newtonian classical physics. EMQG is a quantum gravity theory that is manifestly compatible with Cellular Automata (CA) theory, a new paradigm for physical reality. EMQG is also based on a theory of inertia proposed by R. Haisch, A. Rueda, and H. Puthoff, which we modified and called Quantum Inertia...
Increasing viscosity and inertia using a robotically-controlled pen improves handwriting in children
Ben-Pazi, Hilla; Ishihara, Abraham; Kukke, Sahana; Sanger, Terence D
The aim of this study was to determine the effect of the mechanical properties of the pen on the quality of handwriting in children. Twenty-two school-aged children, ages 8-14 years, wrote in cursive using a pen attached to a robot. The robot was programmed to increase the effective weight (inertia) and stiffness (viscosity) of the pen. Speed, frequency, variability, and quality of the two handwriting samples were compared. Increased inertia and viscosity improved handwriting quality in 85% of children. Handwriting quality did not correlate with changes in speed, suggesting that improvement was not due to reduced speed. Measures of movement variability remained unchanged, suggesting that improvement was not due to mechanical smoothing of pen movement by the robot. Since improvement was not explained by reduced speed or mechanical smoothing, we conclude that children alter handwriting movements in response to pen mechanics. Altered movement could be caused by changes in proprioceptive sensory feedback. PMID:19794098
Discovery Science: Newton All around You.
Prigo, Robert; Humphrey, Gregg
Presents activities for helping elementary students learn about Newton's third law of motion. Several activity cards demonstrate the concept of the law of action and reaction. The activities require only inexpensive materials that can be found around the house. (SM)
Penning ionization cross sections of excited rare gas atoms
Ukai, Masatoshi; Hatano, Yoshihiko.
Electronic energy transfer processes involving excited rare gas atoms play one of the most important roles in ionized gas phenomena. Penning ionization is one of the well known electronic energy transfer processes and has been studied extensively both experimentally and theoretically. The present paper reports the deexcitation (Penning ionization) cross sections of metastable state helium He(2³S) and radiative He(2¹P) atoms in collision with atoms and molecules, which have recently been obtained by the authors' group by using a pulse radiolysis method. Investigation is made of the selected deexcitation cross sections of He(2³S) by atoms and molecules in the thermal collisional energy region. Results indicate that the cross sections are strongly dependent on the target molecule. The deexcitation probability of He(2³S) per collision increases with the excess electronic energy of He(2³S) above the ionization potential of the target atom or molecule. Another investigation, made on the deexcitation of He(2¹P), suggests that the deexcitation cross section for He(2¹P) by Ar is determined mainly by the Penning ionization cross section due to a dipole-dipole interaction. Penning ionization due to the dipole-dipole interaction is also important for deexcitation of He(2¹P) by the target molecules examined. (N.K.)
Fundamentos kantianos dos axiomas do movimento de Newton
Vieira Coutinho Abreu Gomes, �rio
Abstract: This work falls within the Kantian foundationalist perspective, particularly with respect to Newton's three laws. In his 1786 work, Metaphysical Foundations of Natural Science, Kant undertakes the task of grounding mechanical physics in metaphysical principles. Our aim in this dissertation was to address that work, specifically its third chapter, in which Kant treats Newton's axioms of motion. In this dissertation we elucidate the Kantian argumentation in the founda…
Vollmer, M.
The cooling of objects is often described by a law, attributed to Newton, which states that the temperature difference of a cooling body with respect to the surroundings decreases exponentially with time. Such behaviour has been observed for many laboratory experiments, which led to a wide acceptance of this approach. However, the heat transfer…
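The exponential behaviour described above follows from the standard textbook form of Newton's cooling law (a worked statement added here for reference, not taken from the record):

    \frac{dT}{dt} = -k\,(T - T_{\mathrm{env}}) \;\;\Longrightarrow\;\; T(t) = T_{\mathrm{env}} + (T_0 - T_{\mathrm{env}})\,e^{-kt}, \qquad k > 0.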
(³H)(D-Pen², D-Pen⁵)enkephalin binding to delta opioid receptors on intact neuroblastoma-glioma (NG 108-15) hybrid cells
Knapp, R.J.; Yamamura, H.I. (Univ. of Arizona College of Medicine, Tucson (USA))
(³H)(D-Pen², D-Pen⁵)enkephalin binding to intact NG 108-15 cells has been measured under physiological conditions of temperature and medium. The dissociation constant, receptor density, and Hill slope values measured under these conditions are consistent with values obtained by others using membranes prepared from these cells. Kinetic analysis of the radioligand binding to these cells shows biphasic association and monophasic dissociation processes, suggesting the presence of different receptor affinity states for the agonist. The data show that the binding affinity of (³H)(D-Pen², D-Pen⁵)enkephalin under physiological conditions is not substantially different from that measured in 50 mM Tris buffer using cell membrane fractions. Unlike DPDPE, the μ opioid agonists morphine, normorphine, PL-17, and DAMGO have much lower affinity for the δ receptor measured under these conditions than is observed in studies using 50 mM Tris buffer. The results described here suggest that this assay may serve as a useful model of δ opioid receptor binding in vivo.
On-the-fly XMM-Newton Spacecraft Data Reduction on the Grid
A. Ibarra
We present the results of the first prototype of an XMM-Newton pipeline processing task, parallelized at the CCD level, which can be run on a Grid system. By using the GridWay application and the XMM-Newton Science Archive system, the processing of the XMM-Newton data is distributed across a Virtual Organization (VO) constituted by three different research centres: ESAC (European Space Astronomy Centre), ESTEC (European Space Research and Technology Centre) and UCM (Complutense University of Madrid). The proposed application workflow adjusts well to the Grid environment, making use of the massive parallel resources in a flexible and adaptive fashion.
Isaac Newton learns Hebrew: Samuel Johnson's Nova cubi Hebræi tabella
Joalland, Michael; Mandelbrote, Scott
This article concerns the earliest evidence for Isaac Newton's use of Hebrew: a manuscript copy by Newton of part of a work intended to provide a reader of the Hebrew alphabet with the ability to identify or memorize more than 1000 words and to begin to master the conjugations of the Hebrew verb. In describing the content of this unpublished manuscript and establishing its source and original author for the first time, we suggest how and when Newton may have initially become acquainted with the language. Finally, basing our discussion in part on an examination of the reading marks that Newton left in the surviving copies of Hebrew grammars and lexicons that he owned, we will argue that his interest in Hebrew was not intended to achieve linguistic proficiency but remained limited to particular theological queries of singular concern.
The architecture of Newton, a general-purpose dynamics simulator
Cremer, James F.; Stewart, A. James
The architecture for Newton, a general-purpose system for simulating the dynamics of complex physical objects, is described. The system automatically formulates and analyzes equations of motion, and automatically modifies the system of equations when necessitated by changes in kinematic relationships between objects. Impact and temporary contact are handled, although only using simple models. User-directed influence of simulations is achieved using Newton's module, which can be used to experiment with the control of many-degree-of-freedom articulated objects.
Classical mechanics from Newton to Einstein : a modern introduction
McCall, Martin
This new edition of Classical Mechanics, aimed at undergraduate physics and engineering students, presents in a user-friendly style an authoritative approach to the complementary subjects of classical mechanics and relativity. The text starts with a careful look at Newton's Laws, before applying them in one dimension to oscillations and collisions. More advanced applications - including gravitational orbits and rigid body dynamics - are discussed after the limitations of Newton's inertial frames have been highlighted through an exposition of Einstein's Special Relativity. Examples gi…
A Line Search Multilevel Truncated Newton Algorithm for Computing the Optical Flow
Lluís Garrido
We describe the implementation details and give the experimental results of three optimization algorithms for dense optical flow computation. In particular, using a line search strategy, we evaluate the performance of the unilevel truncated Newton method (LSTN), a multiresolution truncated Newton method (MR/LSTN) and a full multigrid truncated Newton method (FMG/LSTN). We use three image sequences and four models of optical flow for performance evaluation. The FMG/LSTN algorithm is shown to lead to better optical flow estimation with less computational work than both the LSTN and MR/LSTN algorithms.
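As a generic illustration of the truncated Newton idea named above — one Newton step whose linear system is solved only approximately by conjugate gradients, followed by a backtracking line search — the sketch below is a minimal, stand-alone Python version; it is not the authors' implementation, and the toy objective, tolerances and fallback rules are illustrative assumptions:

    import numpy as np

    def truncated_newton_step(x, f, grad, hess_vec, cg_tol=0.1, cg_maxiter=50):
        """One truncated-Newton step: solve H p = -g approximately with CG,
        then backtrack along p until an Armijo decrease is obtained."""
        g = grad(x)
        p, r, d = np.zeros_like(g), -g.copy(), -g.copy()
        rs_old = r @ r
        for _ in range(cg_maxiter):              # inexact (truncated) CG solve
            Hd = hess_vec(x, d)
            if d @ Hd <= 0:                      # negative curvature: stop early
                break
            alpha = rs_old / (d @ Hd)
            p += alpha * d
            r -= alpha * Hd
            rs_new = r @ r
            if np.sqrt(rs_new) < cg_tol * np.linalg.norm(g):
                break
            d = r + (rs_new / rs_old) * d
            rs_old = rs_new
        if not p.any():                          # fall back to steepest descent
            p = -g
        t, fx = 1.0, f(x)
        while f(x + t * p) > fx + 1e-4 * t * (g @ p) and t > 1e-8:
            t *= 0.5                             # backtracking line search
        return x + t * p

    # toy usage on a quadratic objective f(x) = 0.5 x^T A x - b^T x
    A = np.array([[3.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 1.0])
    f = lambda x: 0.5 * x @ A @ x - b @ x
    grad = lambda x: A @ x - b
    hess_vec = lambda x, v: A @ v
    x_new = truncated_newton_step(np.zeros(2), f, grad, hess_vec)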
Pen-Enabled, Real-Time Student Engagement for Teaching in STEM Subjects
Urban, Sylvia
The introduction of pen-enabling devices has been demonstrated to increase a student's ability to solve problems, communicate, and learn during note taking. For the science, technology, engineering, and mathematics subjects that are considered to be symbolic in nature, pen interfaces are better suited for visual-spatial content and also provide a…
Easter eggs, myths and jokes in famous physics books and papers
Fortunato, Lorenzo
I will report below on a few examples of raving and insane (or maybe utterly genial) sentences that can be found in famous and otherwise admirable books of physics, because I genuinely believe it is amusing.
On Finding the Source of Human Energy: The Influence of Famous Quotations on Willpower.
Alcoba, Jesús; López, Laura
Positive psychology focuses on aspects that human beings can improve, thereby enhancing their growth and happiness. One of these aspects is willpower, a quality that has been demonstrated to have various benefits for people, as widely shown in the literature. As a result, a growing body of research is attempting to establish the conditions under which an individual's willpower can be increased. This work attempts to confirm whether the famous quotations that people often use to inspire or motivate themselves can have a real effect on willpower. Two experiments were conducted, randomly assigning subjects to groups, priming them with famous quotations, and afterwards comparing their performance on a willpower task with that of a control group. The second experiment added a willpower depletion task before priming. As a result, primed subjects endured the willpower task for significantly more time than the control group, demonstrating that famous quotations related to willpower help to increase this capacity and to counteract the effect of willpower depletion.
Variational nature, integration, and properties of Newton reaction path.
Bofill, Josep Maria; Quapp, Wolfgang
The distinguished coordinate path and the reduced gradient following path or its equivalent formulation, the Newton trajectory, are analyzed and unified using the theory of calculus of variations. It is shown that their minimum character is related to the fact that the curve is located in a valley region. In this case, we say that the Newton trajectory is a reaction path with the category of minimum energy path. In addition to these findings a Runge-Kutta-Fehlberg algorithm to integrate these curves is also proposed.
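For readers unfamiliar with the terminology, a Newton trajectory (reduced gradient following curve) for a fixed search direction r on a potential energy surface E is commonly defined by the condition that the gradient stays parallel to r; this standard definition is quoted from the general literature rather than from the abstract above:

    \left(I - \frac{\mathbf{r}\,\mathbf{r}^{\mathsf{T}}}{|\mathbf{r}|^{2}}\right)\nabla E(\mathbf{x}) = \mathbf{0}.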
Public debate on the Penly 3 project. Construction of an electronuclear production unit of the Penly site (Seine-Maritime)
After a presentation of the objectives of the Penly 3 project, this report gives an overview of the context of electricity production (increasing world demand; geographically unbalanced energy reserves with fluctuating prices and a tendency to increase; the French energy balance; the peculiarities of electricity; electricity production and consumption in France in 2009; the climate change issue). It presents the Penly 3 project and its alternatives within the framework of French environmental and energy policy. The project is then presented in terms of safety objectives, design choices, and environmental improvements (water sampling; thermal, chemical and radioactive releases; wastes; noise and visual impact; foreseen cost and financing), and then in terms of socio-economic impact. The main steps of the project are briefly indicated.
Product Plan of New Generation System Camera "OLYMPUS PEN E-P1"
Ogawa, Haruo
"OLYMPUS PEN E-P1", which is new generation system camera, is the first product of Olympus which is new standard "Micro Four-thirds System" for high-resolution mirror-less cameras. It continues good sales by the concept of "small and stylish design, easy operation and SLR image quality" since release on July 3, 2009. On the other hand, the half-size film camera "OLYMPUS PEN" was popular by the concept "small and stylish design and original mechanism" since the first product in 1959 and recorded sale number more than 17 million with 17 models. By the 50th anniversary topic and emotional value of the Olympus pen, Olympus pen E-P1 became big sales. I would like to explain the way of thinking of the product plan that included not only the simple functional value but also emotional value on planning the first product of "Micro Four-thirds System".
Newton law in DGP brane-world with semi-infinite extra dimension
Park, D.K.; Tamaryan, S.; Miao Yangang
The Newton potential for the DGP brane-world scenario is examined when the extra dimension is semi-infinite. The final form of the potential involves a self-adjoint extension parameter α, which plays the role of an additional mass (or distance) scale. The striking feature of the Newton potential in this setup is that the potential behaves as seven-dimensional at long range when α is non-zero. For small α there is an intermediate range where the potential is five-dimensional. The five-dimensional Newton constant decreases as α increases from zero. At short range the four-dimensional behavior is recovered. The physical implication of this result is discussed in the context of the accelerating behavior of the universe.
Pen harvester for powering a pulse rate sensor
Bedekar, Vishwas; Oliver, Josiah; Priya, Shashank
Rapid developments in the area of micro-sensors for various applications such as structural health monitoring, bio-chemical sensors and pressure sensors have increased the demand for portable, low-cost, high-efficiency energy harvesting devices. In this paper, we describe a scheme for powering a pulse rate sensor with a vibration energy harvester integrated inside a pen commonly carried by humans in the pocket close to the heart. Electromagnetic energy harvesting was selected in order to achieve high power at lower frequencies. The prototype pen harvester was found to generate 3 mW at 5 Hz and 1 mW at 3.5 Hz operating under a displacement amplitude of 16 mm (corresponding to accelerations of approximately 1.14 g rms at 5 Hz and 0.56 g rms at 3.5 Hz, respectively). Comprehensive mathematical modelling and simulations were performed in order to optimize the performance of the vibration energy harvester. The integrated pen harvester prototype was found to generate continuous power of 0.46-0.66 mW under normal human actions such as jogging and jumping, which is enough for a small-scale pulse rate sensor.
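The quoted r.m.s. accelerations are consistent with sinusoidal motion at the stated displacement amplitude (a quick check, assuming 16 mm is the peak amplitude x₀ of the sinusoidal excitation):

    a_{\mathrm{rms}} = \frac{(2\pi f)^2 x_0}{\sqrt{2}}: \quad f = 5\ \mathrm{Hz} \Rightarrow a_{\mathrm{rms}} \approx 11.2\ \mathrm{m/s^2} \approx 1.14\,g, \qquad f = 3.5\ \mathrm{Hz} \Rightarrow a_{\mathrm{rms}} \approx 5.5\ \mathrm{m/s^2} \approx 0.56\,g.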
Newton's constant from a minimal length: additional models
Sahlmann, Hanno
We follow arguments of Verlinde (2010 arXiv:1001.0785 [hep-th]) and Klinkhamer (2010 arXiv:1006.2094 [hep-th]), and construct two models of the microscopic theory of a holographic screen that allow for the thermodynamical derivation of Newton's law, with Newton's constant expressed in terms of a minimal length scale l contained in the area spectrum of the microscopic theory. One of the models is loosely related to the quantum structure of surfaces and isolated horizons in loop quantum gravity. Our investigation shows that the conclusions reached by Klinkhamer regarding the new length scale l seem to be generic in all their qualitative aspects.
A Non-smooth Newton Method for Multibody Dynamics
Erleben, K.; Ortiz, R.
In this paper we deal with the simulation of rigid bodies. Rigid body dynamics have become very important for simulating rigid body motion in interactive applications, such as computer games or virtual reality. We present a novel way of computing contact forces using a Newton method. The contact problem is reformulated as a system of non-linear and non-smooth equations, and we solve this system using a non-smooth version of Newton's method. One of the main contributions of this paper is the reformulation of the complementarity problems, used to model impacts, as a system of equations that can be solved using traditional methods.
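One common way to turn a complementarity condition into a system of (non-smooth) equations, as described above, is the Fischer-Burmeister reformulation; this is a textbook device and not necessarily the authors' exact choice:

    0 \le \lambda \;\perp\; g(\lambda) \ge 0 \;\;\Longleftrightarrow\;\; \phi\bigl(\lambda, g(\lambda)\bigr) = 0, \qquad \phi(a,b) = \sqrt{a^2 + b^2} - a - b.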
Teaching Newton's Third Law of Motion in the Presence of Student Preconception
Poon, C. H.
The concept of interaction that underlies Newton's Laws of Motion is compared with the students' commonsense ideas of force and motion. An approach to teaching Newton's Third Law of Motion is suggested that focuses on refining the student's intuitive thinking on the nature of interaction.
POEMS in Newton's Aerodynamic Frustum
Sampedro, Jaime Cruz; Tetlalmatzi-Montiel, Margarita
The golden mean is often naively seen as a sign of optimal beauty but rarely does it arise as the solution of a true optimization problem. In this article we present such a problem, demonstrating a close relationship between the golden mean and a special case of Newton's aerodynamical problem for the frustum of a cone. Then, we exhibit a parallel…
9 CFR 89.5 - Feeding pens.
Title 9 (Animals and Animal Products), Section 89.5 "Feeding pens" (2010-01-01 edition). Animal and Plant Health Inspection Service, Department of Agriculture; Interstate Transportation of Animals (Including Poultry) and Animal Products; Statement of Policy Under the…
Ballpoint pen ingestion in a 2-year-old child.
Rameau, Anaïs; Anand, Sumeet M; Nguyen, Lily H
A 2-year-old girl ingested a ballpoint pen, which was found on chest x-ray to have lodged in the lower esophagus and stomach. The pen, which measured nearly 15 cm in length, was removed via rigid esophagoscopy without complication. To the best of our knowledge, this is the longest nonflexible foreign body ingested by a young child ever reported in the English-language literature. We describe the presentation of this case and the current guidelines for safety as enumerated in the Small Parts Regulations established by the U.S. Consumer Product Safety Commission.
Local Convergence and Radius of Convergence for Modified Newton Method
Măruşter Ştefan
We investigate the local convergence of the modified Newton method, i.e., the classical Newton method in which the derivative is periodically re-evaluated. Based on the convergence properties of Picard iteration for demicontractive mappings, we give an algorithm to estimate the local radius of convergence for the considered method. Numerical experiments show that the proposed algorithm gives estimated radii which are very close to, or even equal to, the best ones.
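A minimal sketch of the modified Newton iteration described above, for a scalar equation f(x) = 0 (the derivative is re-evaluated only every m steps; the example function, m, and tolerances are illustrative assumptions, not values from the paper):

    def modified_newton(f, df, x0, m=5, tol=1e-12, max_iter=100):
        """Classical Newton iteration in which the derivative df is
        re-evaluated only every m steps and reused in between."""
        x = x0
        d = df(x0)                  # frozen derivative
        for k in range(max_iter):
            if k % m == 0:
                d = df(x)           # periodic re-evaluation
            x_new = x - f(x) / d
            if abs(x_new - x) < tol:
                return x_new
            x = x_new
        return x

    # toy usage: square root of 2 as the root of f(x) = x^2 - 2
    root = modified_newton(lambda x: x * x - 2.0, lambda x: 2.0 * x, x0=1.0)
    print(root)   # approximately 1.4142135623730951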
Visual imagery of famous faces: effects of memory and attention revealed by fMRI.
Ishai, Alumit; Haxby, James V; Ungerleider, Leslie G
Complex pictorial information can be represented and retrieved from memory as mental visual images. Functional brain imaging studies have shown that visual perception and visual imagery share common neural substrates. The type of memory (short- or long-term) that mediates the generation of mental images, however, has not been addressed previously. The purpose of this study was to investigate the neural correlates underlying imagery generated from short- and long-term memory (STM and LTM). We used famous faces to localize the visual response during perception and to compare the responses during visual imagery generated from STM (subjects memorized specific pictures of celebrities before the imagery task) and imagery from LTM (subjects imagined famous faces without seeing specific pictures during the experimental session). We found that visual perception of famous faces activated the inferior occipital gyri, lateral fusiform gyri, the superior temporal sulcus, and the amygdala. Small subsets of these face-selective regions were activated during imagery. Additionally, visual imagery of famous faces activated a network of regions composed of bilateral calcarine, hippocampus, precuneus, intraparietal sulcus (IPS), and the inferior frontal gyrus (IFG). In all these regions, imagery generated from STM evoked more activation than imagery from LTM. Regardless of memory type, focusing attention on features of the imagined faces (e.g., eyes, lips, or nose) resulted in increased activation in the right IPS and right IFG. Our results suggest differential effects of memory and attention during the generation and maintenance of mental images of faces.
Nonlinear PIC simulation in a Penning trap
Lapenta, G.; Delzanno, G.L.; Finn, J. M.
We study the nonlinear dynamics of a Penning trap plasma, including the effect of the finite length and end curvature of the plasma column. A new cylindrical PIC code, called KANDINSKY, has been implemented by using a new interpolation scheme. The principal idea is to calculate the volume of each cell from a particle volume, in the same manner as it is done for the cell charge. With this new method, the density is conserved along streamlines and artificial sources of compressibility are avoided. The code has been validated with a reference Eulerian fluid code. We compare the dynamics of three different models: a model with compression effects, the standard Euler model and a geophysical fluid dynamics model. The results of our investigation prove that Penning traps can really be used to simulate geophysical fluids
How Two Differing Portraits of Newton Can Teach Us about the Cultural Context of Science
Tucci, Pasquale
Like several scientists, Isaac Newton has been represented many times over many different periods, and portraits of Newton were often commissioned by the scientist himself. These portraits tell us a lot about the scientist, the artist and the cultural context. This article examines two very different portraits of Newton that were realized more…
Isaac Newton's scientific method turning data into evidence about gravity and cosmology
Harper, William L.
Isaac Newton's Scientific Method examines Newton's argument for universal gravity and his application of it to resolve the problem of deciding between geocentric and heliocentric world systems by measuring masses of the sun and planets. William L. Harper suggests that Newton's inferences from phenomena realize an ideal of empirical success that is richer than prediction. Any theory that can achieve this rich sort of empirical success must not only be able to predict the phenomena it purports to explain, but also have those phenomena accurately measure the parameters which explain them. Harper explores the ways in which Newton's method aims to turn theoretical questions into ones which can be answered empirically by measurement from phenomena, and to establish that propositions inferred from phenomena are provisionally accepted as guides to further research. This methodology, guided by its rich ideal of empirical success, supports a conception of scientific progress that does not require construing it as progr...
…mathematical talent brought him close to Newton. In 1697, he was elected a member of the Royal Society. In 1722, he proposed the famous theorem which bears his name, but never published it. He was one of the members of the commission set up by the Royal Society for settling the well-known priority dispute between Newton and Leibniz.
A gravitação universal na filosofia da natureza de Isaac Newton
Garcia, Valdinei Gomes
Abstract: This research presents a study of the concept of gravitational force in Isaac Newton's philosophy of nature. The text is built from the arguments developed by Newton to defend this concept in his most important work, the Philosophiae Naturalis Principia Mathematica (1687). It will be seen that, in such arguments, Newton constrains the concept of gravitational force through a mathematical treatment which he himself elaborated in his work. On the other hand…
British physics: Newton's law of funding
In Britain, fundamental physics is in a pickle. ISAAC NEWTON, besides being the founder of modern physics, was also master of Britain's mint. That is a precedent which many British physicists must surely wish had become traditional. At the moment, money for physics is in short supply in Britain.
The Cooling Law and the Search for a Good Temperature Scale, from Newton to Dalton
Besson, Ugo
The research on the cooling law began with an article by Newton published in 1701. Later, many studies were performed by other scientists confirming or confuting Newton's law. This paper presents a description and an interpretation of Newton's article, provides a short overview of the research conducted on the topic during the 18th century, and…
The Schrödinger–Newton equation and its foundations
Bahrami, Mohammad; Großardt, André; Donadi, Sandro; Bassi, Angelo
The necessity of quantising the gravitational field is still subject to an open debate. In this paper we compare the approach of quantum gravity, with that of a fundamentally semi-classical theory of gravity, in the weak-field non-relativistic limit. We show that, while in the former case the Schrödinger equation stays linear, in the latter case one ends up with the so-called Schrödinger–Newton equation, which involves a nonlinear, non-local gravitational contribution. We further discuss that the Schrödinger–Newton equation does not describe the collapse of the wave-function, although it was initially proposed for exactly this purpose. Together with the standard collapse postulate, fundamentally semi-classical gravity gives rise to superluminal signalling. A consistent fundamentally semi-classical theory of gravity can therefore only be achieved together with a suitable prescription of the wave-function collapse. We further discuss, how collapse models avoid such superluminal signalling and compare the nonlinearities appearing in these models with those in the Schrödinger–Newton equation. (paper)
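For reference, the single-particle Schrödinger–Newton equation discussed above is conventionally written as follows; this is the standard form from the literature, quoted here to fix notation:

    i\hbar\,\partial_t \psi(\mathbf{r},t) = \left[-\frac{\hbar^2}{2m}\nabla^2 - G m^2 \int \frac{|\psi(\mathbf{r}',t)|^2}{|\mathbf{r}-\mathbf{r}'|}\,d^3r'\right]\psi(\mathbf{r},t).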
Quantification of Tissue Trauma following Insulin Pen Needle Insertions in Skin
Jensen, Casper Bo; Larsen, Rasmus; Vestergaard, Jacob Schack
Objective: Within the field of pen needle development, most research on needle design revolves around mechanical tensile testing and patient statements. Only little has been published on the actual biological skin response to needle insertions. The objective of this study was to develop a computational method to quantify tissue trauma based on skin bleeding and immune response. Method: Two common-sized pen needles of 28G (0.36 mm) and 32G (0.23 mm) were inserted into the skin of sedated LYD pigs prior to termination. Four pigs were included and a total of 32 randomized needle insertions were conducted … diameter. Conclusion: A computational and quantitative method has been developed to assess tissue trauma following insulin pen needle insertions. Application of the method is tested by conducting a needle diameter study. The obtained quantitative measures of tissue trauma correlate positively to needle…
Microstructural control over soluble pentacene deposited by capillary pen printing for organic electronics.
Lee, Wi Hyoung; Min, Honggi; Park, Namwoo; Lee, Junghwi; Seo, Eunsuk; Kang, Boseok; Cho, Kilwon; Lee, Hwa Sung
Research into printing techniques has received special attention for the commercialization of cost-efficient organic electronics. Here, we have developed a capillary pen printing technique to realize a large-area pattern array of organic transistors and systematically investigated the self-organization behavior of printed soluble organic semiconductor ink. The capillary pen-printed deposits of the organic semiconductor 6,13-bis(triisopropylsilylethynyl) pentacene (TIPS_PEN) were well optimized in terms of morphological and microstructural properties by using ink with mixed solvents of chlorobenzene (CB) and 1,2-dichlorobenzene (DCB). In particular, a 1:1 solvent ratio results in the best transistor performances. This result is attributed to the unique evaporation characteristics of the TIPS_PEN deposits, where fast evaporation of CB induces a morphological evolution at the initial printed position, and the remaining DCB, with its slow evaporation rate, offers a favorable crystal evolution at the pinned position. Finally, a large-area transistor array was facilely fabricated by drawing organic electrodes and active layers with a versatile capillary pen. Our approach provides an efficient printing technique for fabricating large-area arrays of organic electronics and further suggests a methodology to enhance their performances by microstructural control of the printed organic semiconducting deposits.
[Isaac Newton's Anguli Contactus method].
Wawrzycki, Jarosław
In this paper we discuss the geometrical method for calculating the curvature of a class of curves from the third Book of Isaac Newton's Principia. The method applies to any curve generated from an elementary curve (in fact, from any curve whose curvature we know) by means of a transformation that increases the polar angular coordinate in a constant ratio while leaving the polar radial coordinate unchanged.
Anisotropic harmonic oscillator, non-commutative Landau problem and exotic Newton-Hooke symmetry
Alvarez, Pedro D.; Gomis, Joaquim; Kamimura, Kiyoshi; Plyushchay, Mikhail S.
We investigate the planar anisotropic harmonic oscillator with explicit rotational symmetry as a particle model with non-commutative coordinates. It includes the exotic Newton-Hooke particle and the non-commutative Landau problem as special, isotropic and maximally anisotropic, cases. The system is described by the same (2+1)-dimensional exotic Newton-Hooke symmetry as in the isotropic case, and develops three different phases depending on the values of the two central charges. The special cases of the exotic Newton-Hooke particle and non-commutative Landau problem are shown to be characterized by additional, so(3) or so(2,1) Lie symmetry, which reflects their peculiar spectral properties
Isaac Newton Institute of Chile: The fifteenth anniversary of its "Yugoslavia" Branch
Dimitrijević, M. S.
In 2002, the Isaac Newton Institute of Chile established in Belgrade its "Yugoslavia" Branch, one of 15 branches in nine countries in Eastern Europe and Eurasia. On the occasion of fifteen years since its foundation, the activities of "Yugoslavia" Branch of the Isaac Newton Institute of Chile are briefly reviewed.
The Newtonian Moment - Isaac Newton and the Making of Modern Culture
Feingold, Mordechai
Isaac Newton is a legendary figure whose mythical dimension threatens to overshadow the actual man. The story of the apple falling from the tree may or may not be true, but Isaac Newton's revolutionary discoveries and their importance to the Enlightenment era and beyond are undeniable. The Newtonian Moment , a companion volume to a forthcoming exhibition by the New York Public Library, investigates the effect that Newton's theories and discoveries had, not only on the growth of science, but also on the very shape of modern culture and thought. Newton's scientific work at Cambridge was groundbreaking. From his optical experiments with prisms during the 1660s to the publication of both Principia (1687) and Opticks (1704), Newton's achievements were widely disseminated, inciting tremendous interest and excitement. Newtonianism developed into a worldview marked by many tensions: between modernity and the old guard, between the humanities and science, and the public battles between great minds. The Newtonian Moment illuminates the many facets of his colossal accomplishments, as well as the debates over the kind of knowledge that his accomplishments engendered. The book contributes to a greater understanding of the world today by offering a panoramic view of the profound impact of Newtonianism on the science, literature, art, and religion of the Enlightenment. Copiously illustrated with items drawn from the collections of the New York Public Library as well as numerous other libraries and museums, The Newtonian Moment enlightens its audience with a guided and in-depth look at the man, his world, and his enduring legacy.
A flexible approach to light pen calibration for a monocular-vision-based coordinate measuring system
Fu, Shuai; Zhang, Liyan; Ye, Nan; Liu, Shenglan; Zhang, WeiZhong
A monocular-vision-based coordinate measuring system (MVB-CMS) obtains the 3D coordinates of the probe tip center of a light pen by analyzing the monocular image of the target points on the light pen. The light pen calibration, including the target point calibration and the probe tip center calibration, is critical to guarantee the accuracy of the MVB-CMS. The currently used method resorts to special equipment to calibrate the feature points on the light pen in a separate offsite procedure and uses the system camera to calibrate the probe tip center onsite. Instead, a complete onsite light pen calibration method is proposed in this paper. It needs only several auxiliary target points with the same visual features of the light pen targets and two or more cone holes with known distance(s). The target point calibration and the probe tip center calibration are jointly implemented by simply taking two groups of images of the light pen with the camera of the system. The proposed method requires no extra equipment other than the system camera for the calibration, so it is easier to implement and flexible for use. It has been incorporated in a large field-of-view MVB-CMS, which uses active luminous infrared LEDs as the target points. Experimental results demonstrate the accuracy and effectiveness of the proposed method. (paper)
(Anti)hydrogen recombination studies in a nested Penning trap
Quint, W.; Kaiser, R.; Hall, D.; Gabrielse, G.
Extremely cold antiprotons, stored in Penning trap at 4 K, open the way toward the production and study of cold antihydrogen. We have begun experimentally investigating the possibility to recombine cold positrons and antiprotons within nested Penning traps. Trap potentials are adjusted to allow cold trapped protons (and positive helium ions) to pass through cold trapped electrons. Electrons, protons and ions are counted by ejecting them to a cold channel plate and by nondestructive radiofrequency techniques. The effect of the space charge of one trapped species upon another trapped species passing through is clearly observed. (orig.)
III. Penning ionization, associative ionization and chemi-ionization processes
Cermak, V.
Physical mechanisms of three important ionization processes in a cold plasma and the methods of their experimental study are discussed. An apparatus for the investigation of the Penning ionization using ionization processes of long lived metastable rare gas atoms is described. Methods of determining interaction energies and ionization rates from the measured energy spectra of the originating electrons are described and illustrated by several examples. Typical associative ionization processes are listed and the ionization rates are compared with those of the Penning ionization. Interactions with short-lived excited particles and the transfer of excitation without ionization are discussed. (J.U.)
Lack of semantic priming effects in famous person recognition in Mild Cognitive Impairment.
Brambati, Simona M; Peters, Frédéric; Belleville, Sylvie; Joubert, Sven
Growing evidence indicates that individuals with Mild Cognitive Impairment (MCI) manifest semantic deficits that are often more severe for items that are characterized by a unique semantic and lexical association, such as famous people and famous buildings, than common concepts, such as objects. However, it is still controversial whether the semantic deficits observed in MCI are determined by a degradation of semantic information or by a deficit in intentional access to semantic knowledge. Here we used a semantic priming task in order to assess the integrity of the semantic system without requiring explicit access to this system. This paradigm may provide new insights in clarifying the nature of the semantic deficits in MCI. We assessed the semantic and repetition priming effect in 13 individuals with MCI and 13 age-matched controls who engaged in a familiarity judgment task of famous names. In the semantic priming condition, the prime was the name of a member of the same occupation category as the target (Tom Cruise-Brad Pitt), while in the repetition priming condition the prime was the same name as the target (Charlie Chaplin-Charlie Chaplin). The results showed a defective priming effect in MCI in the semantic but not in the repetition priming condition. Specifically, when compared to controls, MCI patients did not show a facilitation effect in responding to the same occupation prime-target pairs, but they showed an equivalent facilitation effect when the target was the same name as the prime. The present results provide support to the hypothesis that the semantic impairments observed in MCI cannot be uniquely ascribed to a deficit in intentional access to semantic information. Instead, these findings point to the semantic nature of these deficits and, in particular, to a degraded representation of semantic information concerning famous people. Copyright © 2011 Elsevier Srl. All rights reserved.
Newton, Goethe and the process of perception: an approach to design
Platts, Jim
Whereas Newton traced a beam of white light passing through a prism and fanning out into the colours of the rainbow as it was refracted, Goethe looked through a prism and was concerned with understanding what his eye subjectively saw. He created a sequence of experiments which produced what appeared to be anomalies in Newton's theory. What he was carefully illustrating concerns limitations accepted when following a scientifically objective approach. Newton was concerned with the description of 'facts' derived from the analysis of observations. Goethe was concerned with the synthesis of meaning. He then went on to describe subjective techniques for training 'the mind's eye' to work efficiently in the subjective world of the imagination. Derided as 'not science', what he was actually describing is the skill which is central to creative design.
Smartphone Versus Pen-and-Paper Data Collection of Infant Feeding Practices in Rural China
Zhang, Shuyi; Wu, Qiong; van Velthoven, Michelle HMMT; Chen, Li; Car, Josip; Rudan, Igor; Li, Ye; Scherpbier, Robert W
Background Maternal, Newborn, and Child Health (MNCH) household survey data are collected mainly with pen-and-paper. Smartphone data collection may have advantages over pen-and-paper, but little evidence exists on how they compare. Objective To compare smartphone data collection versus the use of pen-and-paper for infant feeding practices of the MNCH household survey. We compared the two data collection methods for differences in data quality (data recording, data entry, open-ended answers, and interrater reliability), time consumption, costs, interviewers' perceptions, and problems encountered. Methods We recruited mothers of infants aged 0 to 23 months in four village clinics in Zhaozhou Township, Zhao County, Hebei Province, China. We randomly assigned mothers to a smartphone or a pen-and-paper questionnaire group. A pair of interviewers simultaneously questioned mothers on infant feeding practices, each using the same method (either smartphone or pen-and-paper). Results We enrolled 120 mothers, and all completed the study. Data recording errors were prevented in the smartphone questionnaire. In the 120 pen-and-paper questionnaires (60 mothers), we found 192 data recording errors in 55 questionnaires. There was no significant difference in recording variation between the groups for the questionnaire pairs (P = .32) or variables (P = .45). The smartphone questionnaires were automatically uploaded and no data entry errors occurred. We found that even after double data entry of the pen-and-paper questionnaires, 65.0% (78/120) of the questionnaires did not match and needed to be checked. The mean duration of an interview was 10.22 (SD 2.17) minutes for the smartphone method and 10.83 (SD 2.94) minutes for the pen-and-paper method, which was not significantly different between the methods (P = .19). The mean costs per questionnaire were higher for the smartphone questionnaire (¥143, equal to US $23 at the exchange rate on April 24, 2012) than for the pen
One hundred years of pressure hydrostatics from Stevin to Newton
Chalmers, Alan F
This monograph investigates the development of hydrostatics as a science. In the process, it sheds new light on the nature of science and its origins in the Scientific Revolution. Readers will come to see that the history of hydrostatics reveals subtle ways in which the science of the seventeenth century differed from previous periods. The key, the author argues, is the new insights into the concept of pressure that emerged during the Scientific Revolution. This came about due to contributions from such figures as Simon Stevin, Pascal, Boyle and Newton. The author compares their work with Galileo and Descartes, neither of whom grasped the need for a new conception of pressure. As a result, their contributions to hydrostatics were unproductive. The story ends with Newton insofar as his version of hydrostatics set the subject on its modern course. He articulated a technical notion of pressure that was up to the task. Newton compared the mathematical way in hydrostatics and the experimental way, and sided with t...
Newton Decatur AL water sample polyfluor compound discovery
Data.gov (United States)
U.S. Environmental Protection Agency — All the pertinent information for recreation of the published (hopefully) tables and figures. This dataset is associated with the following publication: Newton, S.,...
penORNL: a parallel Monte Carlo photon and electron transport package using PENELOPE
Bekar, Kursat B.; Miller, Thomas Martin; Patton, Bruce W.; Weber, Charles F.
The parallel Monte Carlo photon and electron transport code package penORNL was developed at Oak Ridge National Laboratory to enable advanced scanning electron microscope (SEM) simulations on high-performance computing systems. This paper discusses the implementations, capabilities and parallel performance of the new code package. penORNL uses PENELOPE for its physics calculations and provides all available PENELOPE features to the users, as well as some new features including source definitions specifically developed for SEM simulations, a pulse-height tally capability for detailed simulations of gamma and x-ray detectors, and a modified interaction forcing mechanism to enable accurate energy deposition calculations. The parallel performance of penORNL was extensively tested with several model problems, and very good linear parallel scaling was observed with up to 512 processors. penORNL, along with its new features, will be available for SEM simulations upon completion of the new pulse-height tally implementation.
Is Queen Victoria Lecturing Today? Teaching Human Sexuality Using Famous Personalities.
Parrot, Andrea
Describes a technique for teaching human sexuality in the undergraduate classroom in which the teacher portrays a famous person presenting sexuality topics from his or her perspective. Describes the content of several of these "guest lecturers." Explains the benefits and potential problems of the method. (AEM)
STUDENTS' PROBLEM-SOLVING ABILITY ON NEWTON'S LAWS OF MOTION THROUGH COOPERATIVE PROBLEM SOLVING LEARNING (Kemampuan Pemecahan Masalah Hukum Gerak Newton Mahasiswa melalui Pembelajaran Cooperative Problem Solving)
Agung Wahyu Nurcahyo
The purpose of this study was to describe the increase in problem-solving abilities on Newton's laws of motion and students' perceptions of cooperative problem solving (CPS) learning. Analysis of the data is based on the students' written answers to the five problems and on the results of questionnaires and interviews. This study concluded that: (1) CPS learning has a strong impact (d-effect size = 1.81) on increasing students' ability to solve problems on Newton's laws of motion, and (2) group cooperation in CPS learning makes the problems easier to solve and allows misconceptions to be corrected.
A Fast Newton-Shamanskii Iteration for a Matrix Equation Arising from M/G/1-Type Markov Chains
Pei-Chang Guo
For the nonlinear matrix equations arising in the analysis of M/G/1-type and GI/M/1-type Markov chains, the minimal nonnegative solution G or R can be found by Newton-like methods. We prove monotone convergence results for the Newton-Shamanskii iteration for this class of equations. Starting with a zero initial guess or some other suitable initial guess, the Newton-Shamanskii iteration provides a monotonically increasing sequence of nonnegative matrices converging to the minimal nonnegative solution. A Schur decomposition method is used to accelerate the Newton-Shamanskii iteration. Numerical examples illustrate the effectiveness of the Newton-Shamanskii iteration.
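For orientation, the Newton-Shamanskii idea itself is simple: the Jacobian is factorized only occasionally and reused for several cheaper corrections in between. Below is a minimal, hedged sketch of that general scheme in Python on a generic nonlinear system; the specialization to the minimal nonnegative solution of the M/G/1-type matrix equation (and the Schur-decomposition acceleration) described above is not reproduced here.

    # Newton-Shamanskii: refresh the Jacobian only every `inner_steps` corrections.
    import numpy as np
    from scipy.linalg import lu_factor, lu_solve

    def newton_shamanskii(F, J, x0, inner_steps=3, tol=1e-10, max_outer=50):
        x = np.asarray(x0, dtype=float)
        for _ in range(max_outer):
            lu = lu_factor(J(x))          # factorize the Jacobian once ...
            for _ in range(inner_steps):  # ... and reuse it for several corrections
                r = F(x)
                if np.linalg.norm(r) < tol:
                    return x
                x = x - lu_solve(lu, r)
        return x

    # Toy system: intersection of a circle and a parabola.
    F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[1] - x[0]**2])
    J = lambda x: np.array([[2*x[0], 2*x[1]], [-2*x[0], 1.0]])
    print(newton_shamanskii(F, J, np.array([1.0, 1.0])))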
A Rapid and Sensitive Assay for the Detection of Benzylpenicillin (PenG) in Milk.
Anna Pennacchio
Antibiotics, such as benzyl-penicillin (PenG) and cephalosporin, are the most common compounds used in animal therapy. Their massive and illegal use in animal therapy and prophylaxis inevitably causes the presence of traces in foods of animal origin (milk and meat), which creates several problems for human health. With the aim of preventing the negative impact of β-lactam and, in particular, PenG residues present in milk on customer health, many countries have established maximum residue limits (MRLs). To cope with this problem, we propose here an effective alternative to the analytical methods currently employed to quantify the presence of penicillin G, using the surface plasmon resonance (SPR) method. In particular, the PenG molecule was conjugated to a protein carrier to immunize a rabbit and produce polyclonal antibodies (anti-PenG). The produced antibodies were used as molecular recognition elements for the design of a competitive immunoassay for the detection of PenG by SPR experiments. The detection limit of the developed assay was found to be 8.0 pM, a value much lower than the MRL of the EU regulation, which is fixed at 12 nM. Thus, our results clearly show that this system could be suitable for the accurate and easy determination of PenG.
Application of Quasi-Newton methods to the analysis of axisymmetric pressure vessels
Parisi, D.A.C.
This work studies the application of Quasi-Newton techniques to material nonlinear analysis of axisymmetrical pressure vessels by the finite element method. In the formulation, the material behavior is described by an isotropic elastoplastic model with strain hardening. The continuum is discretized through triangular finite elements of axisymmetrical solids with linear interpolation of the displacement field. The incremental governing equations are derived from the principle of virtual work. The system of simultaneous nonlinear equations is solved iteratively by the Quasi-Newton method employing the BFGS update. The numerical performance of the proposed method is compared with the Newton-Raphson method and some of its variants through some selected examples. (author)
FAMOUS, faster: using parallel computing techniques to accelerate the FAMOUS/HadCM3 climate model with a focus on the radiative transfer algorithm
P. Hanappe
We have optimised the atmospheric radiation algorithm of the FAMOUS climate model on several hardware platforms. The optimisation involved translating the Fortran code to C and restructuring the algorithm around the computation of a single air column. Instead of the existing MPI-based domain decomposition, we used a task queue and a thread pool to schedule the computation of individual columns on the available processors. Finally, four air columns are packed together in a single data structure and computed simultaneously using Single Instruction Multiple Data operations.
The modified algorithm runs more than 50 times faster on the CELL's Synergistic Processing Element than on its main PowerPC processing element. On Intel-compatible processors, the new radiation code runs 4 times faster. On the tested graphics processor, using OpenCL, we find a speed-up of more than 2.5 times as compared to the original code on the main CPU. Because the radiation code takes more than 60 % of the total CPU time, FAMOUS executes more than twice as fast. Our version of the algorithm returns bit-wise identical results, which demonstrates the robustness of our approach. We estimate that this project required around two and a half man-years of work.
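The scheduling idea described above can be illustrated with a small, hedged Python sketch: a thread pool plays the role of the task queue and dispatches the per-column computation to the available workers. compute_column() is a hypothetical stand-in for the per-column radiative transfer kernel, not the FAMOUS code itself.

    from concurrent.futures import ThreadPoolExecutor
    import numpy as np

    def compute_column(column):
        # placeholder physics: a per-column reduction standing in for radiation
        return float(np.tanh(column).sum())

    def radiation_step(columns, workers=4):
        # task queue / thread pool: each air column is an independent task
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(compute_column, columns))

    columns = [np.random.rand(64) for _ in range(1000)]   # 1000 columns, 64 levels each
    print(len(radiation_step(columns)))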
[Preference for etanercept pen versus syringe in patients with chronic arthritis. Nurse education workshop].
Garcia-Diaz, Silvia; Girabent-Farrés, Montserrat; Roig-Vilaseca, Daniel; Reina, Delia; Cerdà , Dacia; González, Marina; Torrente-Segarra, Vicenç; Fíguls, Ramon; Corominas, Hèctor
The aims of this study are to evaluate the level of fear of post-injection pain prior to the administration, the difficulty in handling the device, and the level of satisfaction of patients using a pre-filled syringe versus an etanercept pen, as well as to evaluate the usefulness of the training given by nursing staff prior to starting with the pen, and the preferences of patients after using both devices. A prospective study was designed to follow-up a cohort of patients during a 6 months period. The data was collected using questionnaires and analyzed with SPSS 18.00. Rank and McNemar tests were performed. Statistical significance was pre-set at an α level of 0.05. A total of 29 patients were included, of whom 69% female, and with a mean age 52.5±10.9 years. Of these, 48% had rheumatoid arthritis, 28% psoriatic arthritis, 21% ankylosing spondylitis, and 3% undifferentiated spondyloarthropathy. There were no statistically significant differences either with the fear or pain or handling of the device between the syringe and the pen (P=.469; P=.812; P=.169 respectively). At 6 months, 59% of patients referred to being satisfied or very satisfied with the pen. Almost all (93%) found useful or very useful the training given by nursing staff prior to using the pen, and 55% preferred the pen over the pre-filled syringe. The etanercept pen is another subcutaneous device option for patients with chronic arthritis. According to the present study, nursing educational workshops before starting this therapy are recommended. Copyright © 2012 Elsevier España, S.L. All rights reserved.
Magnetic Levitation and Newton's Third Law
Aguilar, Horacio Munguia
Newton's third law is often misunderstood by students and even their professors, as has already been pointed out in the literature. Application of the law in the context of electromagnetism can be especially problematic, because the idea that the forces of "action" and "reaction" are equal and opposite independent of the medium through which they…
Isaac Newton and the Royal Mint
Nath, Biman. Article-in-a-Box, Resonance – Journal of Science Education, Volume 11, Issue 12, December 2006, pp. 6-7. Permanent link: https://www.ias.ac.in/article/fulltext/reso/011/12/0006-0007
The importance of being equivalent: Newton's two models of one-body motion
Pourciau, Bruce
As an undergraduate at Cambridge, Newton entered into his "Waste Book" an assumption that we have named the Equivalence Assumption (The Younger): "If a body move progressively in some crooked line [about a center of motion] ..., [then this] crooked line may bee conceived to consist of an infinite number of streight lines. Or else in any point of the croked line the motion may bee conceived to be on in the tangent". In this assumption, Newton somewhat imprecisely describes two mathematical models, a "polygonal limit model" and a "tangent deflected model", for "one-body motion", that is, for the motion of a "body in orbit about a fixed center", and then claims that these two models are equivalent. In the first part of this paper, we study the Principia to determine how the elder Newton would more carefully describe the polygonal limit and tangent deflected models. From these more careful descriptions, we then create Equivalence Assumption (The Elder), a precise interpretation of Equivalence Assumption (The Younger) as it might have been restated by Newton, after say 1687. We then review certain portions of the Waste Book and the Principia to make the case that, although Newton never restates nor even alludes to the Equivalence Assumption after his youthful Waste Book entry, still the polygonal limit and tangent deflected models, as well as an unspoken belief in their equivalence, infuse Newton's work on orbital motion. In particular, we show that the persuasiveness of the argument for the Area Property in Proposition 1 of the Principia depends crucially on the validity of Equivalence Assumption (The Elder). After this case is made, we present the mathematical analysis required to establish the validity of the Equivalence Assumption (The Elder). Finally, to illustrate the fundamental nature of the resulting theorem, the Equivalence Theorem as we call it, we present three significant applications: we use the Equivalence Theorem first to clarify and resolve questions
A comparison of different quasi-newton acceleration methods for partitioned multi-physics codes
The giant and the thief
Socrates Bardi, Jason
In 1696 Isaac Newton made what seems like a bizarre career move. Abandoning Cambridge University and the mathematical pursuits that made him famous, the 53-year-old scientist upped sticks for London to become warden of the Royal Mint. Part administrator, part coining expert, and part criminal prosecutor, this new role would occupy Newton for the remaining three decades of his life.
Objects in Motion
Damonte, Kathleen
One thing scientists study is how objects move. A famous scientist named Sir Isaac Newton (1642-1727) spent a lot of time observing objects in motion and came up with three laws that describe how things move. This explanation only deals with the first of his three laws of motion. Newton's First Law of Motion says that moving objects will continue…
Novel dip-pen nanolithography strategies for nanopatterning
Wu, C.C.
Dip-pen nanolithography (DPN) is an atomic force microscopy (AFM)-based lithography technique offering the possibility of fabricating patterns with feature sizes ranging from micrometers to tens of nanometers, utilizing either top-down or bottom-up strategies. Although during its early development
Twisted Acceleration-Enlarged Newton-Hooke Hopf Algebras
Daszkiewicz, M.
Ten Abelian twist deformations of acceleration-enlarged Newton-Hooke Hopf algebra are considered. The corresponding quantum space-times are derived as well. It is demonstrated that their contraction limit τ → ∞ leads to the new twisted acceleration-enlarged Galilei spaces. (author)
Using a Computer Module to Teach Use of the EpiPen®
Amandeep Singh Rai
Background: The medical literature suggests that patients and physicians are deficient in their ability to use a self-injectable epinephrine device (EpiPen®) for management of anaphylaxis. This study aims to determine whether a computer module is an effective tool for the instruction of a technical skill to medical trainees. Methods: We conducted a two-group comparison study of 35 Post-Graduate Year 1 and 2 Family Medicine residents. Participants were instructed on use of the EpiPen® using either a written module or a computer module. Participants were evaluated on use of the EpiPen® using standardized objective outcome measures by a blinded assessor. Assessments took place prior to and following instruction, using the assigned learning modality. Results: There were 34 participants who completed the study. Both groups demonstrated significant improvement in demonstrating use of the EpiPen® following training (p < 0.001 for both). A significant post-training difference favouring the computer module learners over the written module learners was observed (p = 0.035). However, only 53% and 18% of candidates (computer module and written module, respectively) were able to correctly perform all of the checklist steps. Conclusion: While our findings suggest computer modules represent an effective modality for teaching use of the EpiPen® to medical trainees, the low number of candidates who were able to perform all the checklist items regardless of modality needs to be addressed.
The research on the cooling law began with an article by Newton published in 1701. Later, many studies were performed by other scientists confirming or confuting Newton's law. This paper presents a description and an interpretation of Newton's article, provides a short overview of the research conducted on the topic during the 18th century, and discusses the relationships between the research on cooling laws and the definition of a temperature scale, as it was treated in Newton's article and in the work of Dalton, including Dalton's search for the absolute zero of temperature. It is shown that these scientists considered the exponential cooling law as a fundamental principle rather than a conjecture to be tested by means of experiments. The faith in the simplicity of natural laws and the spontaneous idea of proportionality between cause and effect seem to have strongly influenced Newton and Dalton. The topic is developed in a way that can be suitable for both undergraduate students and general physicists.
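For reference, the exponential cooling law discussed in this abstract follows from assuming that the rate of heat loss is proportional to the temperature excess over the surroundings; in modern notation (not Newton's own),

    \frac{dT}{dt} = -k\,\bigl(T - T_{\mathrm{env}}\bigr)
    \quad\Longrightarrow\quad
    T(t) = T_{\mathrm{env}} + \bigl(T_0 - T_{\mathrm{env}}\bigr)\,e^{-kt},

where T_0 is the initial temperature and k a cooling constant.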
Newton solution of inviscid and viscous problems
Venkatakrishnan, V.
The application of Newton iteration to inviscid and viscous airfoil calculations is examined. Spatial discretization is performed using upwind differences with split fluxes. The system of linear equations which arises as a result of linearization in time is solved directly using either a banded matrix solver or a sparse matrix solver. In the latter case, the solver is used in conjunction with the nested dissection strategy, whose implementation for airfoil calculations is discussed. The boundary conditions are also implemented in a fully implicit manner, thus yielding quadratic convergence. Complexities such as the ordering of cell nodes and the use of a far field vortex to correct freestream for a lifting airfoil are addressed. Various methods to accelerate convergence and improve computational efficiency while using Newton iteration are discussed. Results are presented for inviscid, transonic nonlifting and lifting airfoils and also for laminar viscous cases. 17 references
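As a hedged illustration of the basic structure (a generic model problem, not the flux-split airfoil solver itself), the Python sketch below applies Newton iteration to a small discretized nonlinear boundary-value problem and solves each linearized step with a sparse direct solver, which is what yields the quadratic convergence mentioned above when the boundary conditions are also handled implicitly.

    import numpy as np
    import scipy.sparse as sp
    from scipy.sparse.linalg import spsolve

    # Model problem: -u'' + u**3 = 1 on a uniform grid with u(0) = u(1) = 0.
    n = 200
    h = 1.0 / (n + 1)
    L = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc") / h**2

    def residual(u):
        return L @ u + u**3 - 1.0

    def jacobian(u):
        return (L + sp.diags(3.0 * u**2)).tocsc()

    u = np.zeros(n)
    for _ in range(20):                       # Newton iteration
        r = residual(u)
        if np.linalg.norm(r) < 1e-12:
            break
        u -= spsolve(jacobian(u), r)          # direct sparse solve of the linearization
    print("max |u| =", np.abs(u).max())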
E. A. Venter
The tremendous revival of Christian scientific thought in our spiritless time is undoubtedly an almost inexplicable phenomenon. Throughout the centuries Christians have practised science together with unbelievers, but it was only in our lifetime that the principia of the Christian religion were also made fruitful for the practice of science. In this connection the names of Dooyeweerd, Vollenhoven, Stoker and others will always be mentioned with honour. Of course, professing Christians had previously also cooperated thoroughly in building up science. At that time, however, the intimate connection between religion, philosophy and the practice of science was not yet clearly understood. The work of Sir Isaac Newton dates from this period.
Extending Penning trap mass measurements with SHIPTRAP to the heaviest elements
Block, M.; Ackermann, D.; Herfurth, F.; Hofmann, S.; Blaum, K.; Droese, C.; Marx, G.; Schweikhard, L.; Düllmann, Ch. E.; Eibach, M.; Eliseev, S.; Haettner, E.; Plaß, W. R.; Scheidenberger, C.; Heßberger, F. P.; Ramirez, E. Minaya; Nesterenko, D.
Penning-trap mass spectrometry of radionuclides provides accurate mass values and absolute binding energies. Such mass measurements are sensitive indicators of the nuclear structure evolution far away from stability. Recently, direct mass measurements have been extended to the heavy elements nobelium (Z=102) and lawrencium (Z=103) with the Penning-trap mass spectrometer SHIPTRAP. The results probe nuclear shell effects at N=152. New developments will pave the way to access even heavier nuclides.
CrossRef Space-charge effects in Penning ion traps
Porobić, T; Breitenfeldt, M; Couratin, C; Finlay, P; Knecht, A; Fabian, X; Friedag, P; Fléchard, X; Liénard, E; Ban, G; Zákoucký, D; Soti, G; Van Gorp, S; Weinheimer, Ch; Wursten, E; Severijns, N
The influence of space-charge on ion cyclotron resonances and the magnetron eigenfrequency in a gas-filled Penning ion trap has been investigated. Off-line measurements using the cooling trap of the WITCH retardation spectrometer-based setup at ISOLDE/CERN were performed. Experimental ion cyclotron resonances were compared with ab initio Coulomb simulations and found to be in agreement. As an important systematic effect of the WITCH experiment, the magnetron eigenfrequency of the ion cloud was studied under increasing space-charge conditions. Finally, the helium buffer gas pressure in the Penning trap was determined by comparing experimental cooling rates with simulations.
The experiment presented in this thesis has been designed to test Newton's law of gravitation in the limit of small accelerations caused by weak gravitational forces. It is located at DESY, Hamburg, and is a modification of an experiment that was carried out in Wuppertal, Germany, until 2002 in order to measure the gravitational constant G. The idea of testing Newton's law in the case of small accelerations emerged from the question whether the flat rotation curves of spiral galaxies can be traced back to Dark Matter or to a law of gravitation that deviates from Newton on cosmic scales, like e.g. MOND (Modified Newtonian Dynamics). The core of this experiment is a microwave resonator which is formed by two spherical concave mirrors that are suspended as pendulums. Masses between 1 and 9 kg symmetrically change their distance to the mirrors from far to near positions. Due to the increased gravitational force the mirrors are pulled apart and the length of the resonator increases. This causes a shift of the resonance frequency which can be translated into a shift of the mirror distance. The small masses are sources of weak gravitational forces and cause accelerations on the mirrors of about 10⁻¹⁰ m/s². These forces are comparable to those between stars on cosmic scales and the accelerations are in the vicinity of the characteristic acceleration of MOND, a₀ ≈ 1.2×10⁻¹⁰ m/s², where deviations from Newton's law are expected. Thus Newton's law could be directly checked for correctness under these conditions. First measurements show that due to the sensitivity of this experiment many systematic influences have to be accounted for in order to get consistent results. Newton's law has been confirmed with an accuracy of 3%. MOND has also been checked. In order to be able to distinguish Newton from MOND with other interpolation functions the accuracy of the experiment has to be improved. (orig.)
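As background, the MOND phenomenology referred to above is usually written with an interpolating function mu relating the measured acceleration a to the Newtonian prediction a_N; this is the standard parametrization, not necessarily the exact form tested in the thesis:

    \mu\!\left(\frac{|a|}{a_0}\right)\,a = a_N,
    \qquad \mu(x)\to 1 \ (x\gg 1), \quad \mu(x)\to x \ (x\ll 1),
    \qquad a_0 \approx 1.2\times 10^{-10}\ \mathrm{m/s^2}.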
Effects of lignite application on ammonia and nitrous oxide emissions from cattle pens
Sun, Jianlei; Bai, Mei; Shen, Jianlin; Griffith, David W.T.; Denmead, Owen T.; Hill, Julian; Lam, Shu Kee; Mosier, Arvin R.; Chen, Deli
Beef cattle feedlots are a major source of ammonia (NH₃) emissions from livestock industries. We investigated the effects of lignite surface applications on NH₃ and nitrous oxide (N₂O) emissions from beef cattle feedlot pens. Two rates of lignite, 3 and 6 kg m⁻², were tested in the treatment pen. No lignite was applied in the control pen. Twenty-four Black Angus steers were fed identical commercial rations in each pen. We measured NH₃ and N₂O concentrations continuously from 4th Sep to 13th Nov 2014 using Quantum Cascade Laser (QCL) NH₃ analysers and a closed-path Fourier Transform Infrared Spectroscopy analyser (CP-FTIR) in conjunction with the integrated horizontal flux method to calculate NH₃ and N₂O fluxes. During the feeding period, 16 and 26% of the excreted nitrogen (N) (240 g N head⁻¹ day⁻¹) was lost via NH₃ volatilization from the control pen, while lignite application decreased NH₃ volatilization to 12 and 18% of the excreted N, for Phase 1 and Phase 2, respectively. Compared to the control pen, lignite application decreased NH₃ emissions by approximately 30%. Nitrous oxide emissions from the cattle pens were small, 0.10 and 0.14 g N₂O-N head⁻¹ day⁻¹ (< 0.1% of excreted N) for the control pen, for Phase 1 and Phase 2, respectively. Lignite application increased direct N₂O emissions by 40 and 57%, to 0.14 and 0.22 g N₂O-N head⁻¹ day⁻¹, for Phase 1 and Phase 2, respectively. The increase in N₂O emissions resulting from lignite application was counteracted by the lower indirect N₂O emission due to decreased NH₃ volatilization. Using 1% as a default emission factor of deposited NH₃ for indirect N₂O emissions, the application of lignite decreased total N₂O emissions. - Graphical abstract: Lignite application substantially decreased NH₃ emissions from cattle feedlots and increased
From Newton's Laws to the Trojan War (De las Leyes de Newton a la Guerra de Troya)
Plastino, Ángel Ricardo
The publication in 1687 of the book Philosophiae Naturalis Principia Mathematica by Isaac Newton marked an important milestone in the history of human thought. On the basis of three simple principles of motion and the law of universal gravitation, and by means of mathematical reasoning, Newton managed to explain and unify within a coherent conceptual scheme a great number of natural phenomena: the motion of the planets, the tides, and the shape of the Earth, among others. Moreover, N...
Applied investigation of Moessbauer effect for the famous ancient chinese porcelains
Gao Zhengyao; Chen Songhua; Shen Zuocheng
The famous Ru porcelain, Jun porcelain and Guan porcelain of Song Dynasty and Yuan Dynasty are analyzed. The Moessbauer parameters of the ancient porcelains and the imitative ancient porcelains are compared. The firing techniques, coloring mechanism and microstructures of the ancient Chinese porcelains have been discussed. (7 figs., 4 tabs.)
Non-relativistic conformal symmetries and Newton-Cartan structures
Duval, C; Horvathy, P A
This paper provides us with a unifying classification of the conformal infinitesimal symmetries of non-relativistic Newton-Cartan spacetime. The Lie algebras of non-relativistic conformal transformations are introduced via the Galilei structure. They form a family of infinite-dimensional Lie algebras labeled by a rational 'dynamical exponent', z. The Schroedinger-Virasoro algebra of Henkel et al corresponds to z = 2. Viewed as projective Newton-Cartan symmetries, they yield, for timelike geodesics, the usual Schroedinger Lie algebra, for which z = 2. For lightlike geodesics, they yield, in turn, the Conformal Galilean Algebra (CGA) of Lukierski, Stichel and Zakrzewski (alias 'alt' of Henkel), with z = 1. Physical systems realizing these symmetries include, e.g. classical systems of massive and massless non-relativistic particles, and also hydrodynamics, as well as Galilean electromagnetism.
EFL Students' Perceptions of Social Issues in Famous Works of Art
Bautista Urrego, Lizmendy Zuhey; Parra Toro, Ingrid Judith
This article reports on a qualitative, descriptive, and interpretative research intervention case study of English as a foreign language students' construction of perceptions on social issues found in famous works of art. Participants in this study engaged in the practice of critical thinking as a strategy to appreciate art that expresses social…
Conceptual Understanding and Representation Quality through Multi-representation Learning on Newton Law Content
Suci Furwati
Abstract: Students who have good conceptual acquisition will be able to represent a concept by using multiple representations. This study aims to determine the improvement of students' understanding of the concepts in Newton's Law material, and the quality of the representations used in solving problems on Newton's Law material. The results showed that students' concept acquisition increased from an average of 35.32 to 78.97, with an effect size of 2.66 (strong) and an N-gain of 0.68 (medium). The quality of each type of student representation also increased from level 1 and level 2 up to level 3. Key words: concept acquisition, representation quality, multi-representation learning, Newton's Law
with smoothness promoting optimization/regularization schemes. However, this type of regularization scheme is known to perform poorly when applied in imaging domains with sparse content or sharp variations. In this work, an inexact Newton algorithm
Ecosystem Pen Pals: Using Place-Based Marine Science and Culture to Connect Students
Wiener, Carlie S.; Matsumoto, Karen
The marine environment provides a unique context for students to explore both natural and cultural connections. This paper reports preliminary findings on Ecosystem Pen Pals, an ocean literacy program for 4th and 5th graders focused on using a pen pal model for integrating traditional ecological knowledge into marine science. Surveys with…
Flexible AMOLED display on polyethylene napthalate (PEN) foil with metal-oxide TFT backplane
Tripathi, A.K.; Putten, B. van der; Steen, J.L. van der; Tempelaars, K.; Cobb, B.; Ameys, M.; Ke, T.H.; Myny, K.; Steudel, S.; Nag, M.; Schols, S.; Vicca, P.; Smout, S.; Genoe, J.; Heremans, P.; Yakimets, I.; Gelinck, G.H.
We present a top emitting monochrome AMOLED display with 85dpi resolution using an amorphous Indium-Gallium-Zinc-Oxide (IGZO) TFT backplane on PEN-foil. Maximum processing temperature was limited to 150 °C in order to ensure an overlay accuracy < 3μm on PEN foil. The backplane process flow is based
The problem of Newton dynamics
Roman Roldan, R.
The problem of teaching Newton's principles of dynamics at high-school level is addressed. Certain usages, lines of reasoning and wording are identified as responsible for the deficient results revealed in the background of first-year university students in physics. A methodology based on simplifying the common vocabulary is proposed in order to provide students with a clearer view of dynamics problems. Some typical examples are shown which illustrate the proposal. (Author)
Studying the variability in the Raman signature of writing pen inks.
Braz, André; López-López, María; García-Ruiz, Carmen
This manuscript aims to study the inter and intra brand, model and batch variability in the Raman spectral signature among modern pen inks that will help forensic document examiners during the interpretation process. Results showed that most oil-based samples have similar Raman signatures that are characteristic of the Crystal Violet dye, independently of the brand. Exception was the Pilot samples that use Victoria Pure Blue BO instead. This small inter-brand variability makes oil-based pens difficult to discriminate by brand. On the contrary, gel and liquid-based samples use different colorants such as Rhodamine B, Copper Phthalocyanine, Ethyl Violet and Victoria Blue B. No particular pattern was observed regarding the colorants used by each brand, except the Pilot samples that were the only brand using the Victoria Blue B dye, which is a clear distinct feature. Additionally, the intra-brand variability was also large among gel-based Pilot samples. The small spectral differences observed among several batches of Bic Crystal Medium samples demonstrated that changes were introduced in their chemical formula over the years. The intra-batch variability was small and no spectral differences were observed within batches. This manuscript demonstrates the potential of Raman spectroscopy for discriminating pens inks from different brands and models and even, batches. Additionally, the main colorants used in modern pens were also identified. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Besson, Ugo (Department of Physics 'A Volta', University of Pavia, Via A Bassi 6, 27100 Pavia, Italy)
Electrospray deposition from fountain pen AFM probes
Geerlings, J.; Sarajlic, Edin; Berenschot, Johan W.; Abelmann, Leon; Tas, Niels Roelof
In this paper we present for the first time electrospraying from fountain pen probes. By using electrospray, contactless deposition in an AFM setup becomes possible. Experiments on a dedicated setup were carried out as a first step towards this goal. Spraying from 8 and 2 µm apertures was observed. For
Weight, the Normal Force and Newton's Third Law: Dislodging a Deeply Embedded Misconception
Low, David; Wilson, Kate
On entry to university, high-achieving physics students from all across Australia struggle to identify Newton's third law force pairs. In particular, less than one in ten can correctly identify the Newton's third law reaction pair to the weight of (gravitational force acting on) an object. Most students incorrectly identify the normal force on the…
Franklin Delano Roosevelt: a famous patient.
Hart, Curtis W
Franklin Delano Roosevelt is arguably one of the greatest of American Presidents. His encounter with the polio that crippled him at an early age and its transformative impact upon him are here discussed with particular reference to his relationship with his physician, Dr. George Draper. This transformation liberated energy in Roosevelt to lead and to show empathy for others in ways that both challenged the political and social status quo in the U.S.A. as well as helped save the world from the threat of Fascism in World War II. This essay seeks to demonstrate how an investigation of the life and struggles of this famous patient is one avenue for relating the study of the humanities to medical education. An earlier version of this paper was presented as the Heberden Lecture in the History of Medicine at the New York Academy of Medicine in 2012.
FROM NEWTON TO EINSTEIN: THE FATE OF THE UNIVERSE UNDER DEBATE (DE NEWTON A EINSTEIN: A DEBATE EL DESTINO DEL UNIVERSO)
ROGELIO PARREIRA
This article describes the history of scientific thought in terms of the theories of inertia, absolute space, relativity and gravitation; of how Newton used the work of earlier researchers in his theories, and Einstein used Newton's theories in his own, in order to try to explain the fate of the universe. It is the description of a revolutionary process in scientific knowledge, and of its contributions to the development of many other fields of knowledge.
Designing stellarator coils by a modified Newton method using FOCUS
Zhu, Caoxiang; Hudson, Stuart R.; Song, Yuntao; Wan, Yuanxi
To find the optimal coils for stellarators, nonlinear optimization algorithms are applied in existing coil design codes. However, none of these codes have used the information from the second-order derivatives. In this paper, we present a modified Newton method in the recently developed code FOCUS. The Hessian matrix is calculated with analytically derived equations. Its inverse is approximated by a modified Cholesky factorization and applied in the iterative scheme of a classical Newton method. Using this method, FOCUS is able to recover the W7-X modular coils starting from a simple initial guess. Results demonstrate significant advantages.
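A minimal sketch of a modified Newton step of the kind described, assuming a generic smooth objective rather than the FOCUS coil parametrization: the Hessian is made positive definite before the step is taken. Here adding a multiple of the identity until a Cholesky factorization succeeds is used as a simple stand-in for a modified Cholesky factorization; the analytic coil derivatives of the paper are not reproduced.

    import numpy as np

    def modified_newton_step(grad, hess, x, tau0=1e-3):
        H, tau = hess(x), 0.0
        while True:
            try:
                Lc = np.linalg.cholesky(H + tau * np.eye(len(x)))
                break
            except np.linalg.LinAlgError:
                tau = max(2.0 * tau, tau0)        # damp until positive definite
        # solve (H + tau*I) p = -grad using the triangular factor
        p = np.linalg.solve(Lc.T, np.linalg.solve(Lc, -grad(x)))
        return x + p

    # Toy non-convex objective: f(x) = x0**4 + x0*x1 + (1 + x1)**2
    grad = lambda x: np.array([4*x[0]**3 + x[1], x[0] + 2*(1 + x[1])])
    hess = lambda x: np.array([[12*x[0]**2, 1.0], [1.0, 2.0]])
    x = np.array([0.0, 0.0])
    for _ in range(30):
        x = modified_newton_step(grad, hess, x)
    print(x)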
Quasi-Newton methods for the acceleration of multi-physics codes
Convergence and Applications of a Gossip-Based Gauss-Newton Algorithm
Li, Xiao; Scaglione, Anna
The Gauss-Newton algorithm is a popular and efficient centralized method for solving non-linear least squares problems. In this paper, we propose a multi-agent distributed version of this algorithm, named Gossip-based Gauss-Newton (GGN) algorithm, which can be applied in general problems with non-convex objectives. Furthermore, we analyze and present sufficient conditions for its convergence and show numerically that the GGN algorithm achieves performance comparable to the centralized algorithm, with graceful degradation in case of network failures. More importantly, the GGN algorithm provides significant performance gains compared to other distributed first order methods.
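For contrast with the distributed variant, here is a minimal sketch of the plain centralized Gauss-Newton iteration for a nonlinear least-squares fit; in the gossip-based version described above, the J^T J and J^T r terms are assembled by exchanging local contributions between agents rather than being formed centrally.

    import numpy as np

    def gauss_newton(residual, jac, x0, iters=20):
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            r, J = residual(x), jac(x)
            x = x - np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
        return x

    # Toy example: fit y = a * exp(b * t) to noisy data.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 40)
    y = 2.0 * np.exp(-1.5 * t) + 0.01 * rng.standard_normal(t.size)
    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    jac = lambda p: np.column_stack([np.exp(p[1] * t), p[0] * t * np.exp(p[1] * t)])
    print(gauss_newton(residual, jac, [1.0, -1.0]))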
Deviations from Newton's law in supersymmetric large extra dimensions
Callin, P.; Burgess, C.P.
Deviations from Newton's inverse-squared law at the micron length scale are smoking-gun signals for models containing supersymmetric large extra dimensions (SLEDs), which have been proposed as approaches for resolving the cosmological constant problem. Just like their non-supersymmetric counterparts, SLED models predict gravity to deviate from the inverse-square law because of the advent of new dimensions at sub-millimeter scales. However SLED models differ from their non-supersymmetric counterparts in three important ways: (i) the size of the extra dimensions is fixed by the observed value of the dark energy density, making it impossible to shorten the range over which new deviations from Newton's law must be seen; (ii) supersymmetry predicts there to be more fields in the extra dimensions than just gravity, implying different types of couplings to matter and the possibility of repulsive as well as attractive interactions; and (iii) the same mechanism which is purported to keep the cosmological constant naturally small also keeps the extra-dimensional moduli effectively massless, leading to deviations from general relativity in the far infrared of the scalar-tensor form. We here explore the deviations from Newton's law which are predicted over micron distances, and show the ways in which they differ and resemble those in the non-supersymmetric case
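Short-range searches of this kind are commonly reported against a Yukawa-type parametrization of the two-body potential; this standard form (quoted here for orientation, not as a result specific to the SLED paper) is

    V(r) = -\frac{G\,m_1 m_2}{r}\left(1 + \alpha\, e^{-r/\lambda}\right),

with alpha the strength and lambda the range of the new interaction.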
On deviations from Newton's law and the proposal for a 'Fifth Force'
Ferreira, L.A.; Malbouisson, A.P.C.
The results of geophysical and laboratory measurements of Newton's constant of gravitation, seem to disagree by one percent. Attempts to explain this have led to the revival of the proposal for a fifth interaction in Nature. The experimental results on measurements of G and tests of Newton's inverse square law are reviewed. The recent reanalysis of the Eoetvoes experiment and proposals for new experiments are discussed. (Author) [pt
The Messy Nature of Science: Famous Scientists Can Help Clear Up
Sinclair, Alex; Strachan, Amy
Having embraced the inclusion of evolution in the National Curriculum for primary science in England and briefly bemoaned the omission of any physics in key stage 1 (ages 5-7), it was time to focus on the biggest change, that of working scientifically. While the authors were aware of the non-statutory suggestions to study famous scientists such as…
Preliminary study for the National Energy Plan (PEN) uses the Community market as a reference base [El ante-proyecto del Plan Energetico Nacional (PEN) toma referencia basica del mercado comunitario]
The National Energy Plan (PEN) for the period 1991 to 2000 lays down basic guidelines for a Spanish energy policy. This includes a wide range of economic measures. The PEN is divided into five main sections with two appendices. The sections are: the international situation; energy demand; energy supply; energy and the environment; and R&D policy. The appendices are: Energy-saving and energy-efficiency plan; and General plan for radioactive waste. The PEN provides for 4-year research programmes which aim to reduce the environmental impact of energy production and use. General demand for energy during this period will increase by 2.4% and investment in power installations and in the gas sector will be some 1.5 thousand million pesetas. 4 figs., 3 tabs.
Disk-galaxy density distribution from orbital speeds using Newton's law, version 1.1
Given the dimensions (including thickness) of an axisymmetric galaxy, Newton's law is used in integral form to find the density distributions required to match a wide range of orbital speed profiles. Newton's law is not modified and no dark-matter halos are required. The speed distributions can have extreme shapes if they are reasonably smooth. Several examples are given.
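In the simplest spherical reading of the same inverse problem (the paper itself treats a flattened axisymmetric disk of finite thickness, so its actual integral relation is more involved), the enclosed mass follows directly from the orbital speed:

    \frac{v^2(r)}{r} = \frac{G\,M({<}r)}{r^2}
    \quad\Longrightarrow\quad
    M({<}r) = \frac{v^2(r)\,r}{G}.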
A Newton-type neural network learning algorithm
Ivanov, V.V.; Puzynin, I.V.; Purehvdorzh, B.
First- and second-order learning methods for feed-forward multilayer networks are considered. A Newton-type algorithm is proposed and compared with the common back-propagation algorithm. It is shown that the proposed algorithm provides better learning quality. Some recommendations for their usage are given. 11 refs.; 1 fig.; 1 tab
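As a hedged illustration of a second-order (Newton-type) learning update, the sketch below trains a single logistic unit with full Newton steps, using the Hessian of the cross-entropy loss instead of the plain gradient used by back-propagation; the multilayer feed-forward case treated in the abstract is not reproduced here.

    import numpy as np

    def newton_train(X, y, iters=10, ridge=1e-6):
        w = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ w))              # forward pass
            g = X.T @ (p - y)                             # gradient of cross-entropy loss
            H = X.T @ (X * (p * (1.0 - p))[:, None])      # Hessian of the loss
            w -= np.linalg.solve(H + ridge * np.eye(len(w)), g)   # Newton step
        return w

    rng = np.random.default_rng(1)
    X = np.column_stack([np.ones(200), rng.standard_normal((200, 2))])
    y = (X @ np.array([0.5, 2.0, -1.0]) + 0.3 * rng.standard_normal(200) > 0).astype(float)
    print(newton_train(X, y))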
Chemically stable Au nanorods as probes for sensitive surface enhanced Raman scattering (SERS) analysis of blue BIC ballpoint pens
Alyami, Abeer; Saviello, Daniela; McAuliffe, Micheal A. P.; Cucciniello, Raffaele; Mirabile, Antonio; Proto, Antonio; Lewis, Liam; Iacopino, Daniela
Au nanorods were used as an alternative to commonly used Ag nanoparticles as Surface Enhanced Raman Scattering (SERS) probes for identification of dye composition of blue BIC ballpoint pens. When used in combination with Thin Layer Chromatography (TLC), Au nanorod colloids allowed identification of the major dye components of the BIC pen ink, otherwise not identifiable by normal Raman spectroscopy. Thanks to their enhanced chemical stability compared to Ag colloids, Au nanorods provided stable and reproducible SERS signals and allowed easy identification of phthalocyanine and triarylene dyes in the pen ink mixture. These findings were supported by FTIR and MALDI analyses, also performed on the pen ink. Furthermore, the self-assembly of Au nanorods into large area ordered superstructures allowed identification of BIC pen traces. SERS spectra of good intensity and high reproducibility were obtained using Au nanorod vertical arrays, due to the high density of hot spots and morphological reproducibility of these superstructures. These results open the way to the employment of SERS for fast screening analysis and for quantitative analysis of pens and faded pens which are relevant for the fields of forensic and art conservation sciences.
Rap van tong, scherp van pen. Literaire discussiecultuur in Nederlandse praatjespamfletten (circa 1600-1750)
Dingemanse, C.W.
In the early modern period pamphlets constituted the most important medium to influence public opinion in the Netherlands. The thesis Rap van tong, scherp van pen (Glib tongues, sharp pens) focuses on the literary and rhetorical aspects of a remarkable type of pamphlet called praatje (small-talk),
Role of penA polymorphisms for penicillin susceptibility in Neisseria lactamica and Neisseria meningitidis.
Karch, André; Vogel, Ulrich; Claus, Heike
In meningococci, reduced penicillin susceptibility is associated with five specific mutations in the transpeptidase region of penicillin binding protein 2 (PBP2). We showed that the same set of mutations was present in 64 of 123 Neisseria lactamica strains obtained from a carriage study (MIC range: 0.125-2.0mg/L). The PBP2 encoding penA alleles in these strains were genetically similar to those found in intermediate resistant meningococci suggesting frequent interspecies genetic exchange. Fifty-six N. lactamica isolates with mostly lower penicillin MICs (range: 0.064-0.38mg/L) exhibited only three of the five mutations. The corresponding penA alleles were unique to N. lactamica and formed a distinct genetic clade. PenA alleles with no mutations on the other hand were unique to meningococci. Under penicillin selective pressure, genetic transformation of N. lactamica penA alleles in meningococci was only possible for alleles encoding five mutations, but not for those encoding three mutations; the transfer resulted in MICs comparable to those of meningococci harboring penA alleles that encoded PBP2 with five mutations, but considerably lower than those of the corresponding N. lactamica donor strains. Due to a transformation barrier the complete N. lactamica penA could not be transformed into N. meningitidis. In summary, penicillin MICs in N. lactamica were associated with the number of mutations in the transpeptidase region of PBP2. Evidence for interspecific genetic transfer was only observed for penA alleles associated with higher MICs, suggesting that alleles encoding only three mutations in the transpeptidase region are biologically not effective in N. meningitidis. Factors other than PBP2 seem to be responsible for the high levels of penicillin resistance in N. lactamica. A reduction of penicillin susceptibility in N. meningitidis by horizontal gene transfer from N. lactamica is unlikely to happen. Copyright © 2015 Elsevier GmbH. All rights reserved.
Neisseria gonorrhoeae and extended-spectrum cephalosporins in California: surveillance and molecular detection of mosaic penA.
Gose, Severin; Nguyen, Duylinh; Lowenberg, Daniella; Samuel, Michael; Bauer, Heidi; Pandori, Mark
The spread of Neisseria gonorrhoeae strains with mosaic penA alleles and reduced susceptibility to extended-spectrum cephalosporins is a major public health problem. While much work has been performed internationally, little is known about the genetics or molecular epidemiology of N. gonorrhoeae isolates with reduced susceptibility to extended-spectrum cephalosporins in the United States. The majority of N. gonorrhoeae infections are diagnosed without a live culture. Molecular tools capable of detecting markers of extended-spectrum cephalosporin resistance are needed. Urethral N. gonorrhoeae isolates were collected from 684 men at public health clinics in California in 2011. Minimum inhibitory concentrations (MICs) to ceftriaxone, cefixime, cefpodoxime and azithromycin were determined by Etest and categorized according to the U.S. Centers for Disease Control 2010 alert value breakpoints. 684 isolates were screened for mosaic penA alleles using real-time PCR (RTPCR) and 59 reactive isolates were subjected to DNA sequencing of their penA alleles and Neisseria gonorrhoeae multi-antigen sequence typing (NG-MAST). To increase the specificity of the screening RTPCR in detecting isolates with alert value extended-spectrum cephalosporin MICs, the primers were modified to selectively amplify the mosaic XXXIV penA allele. Three mosaic penA alleles were detected including two previously described alleles (XXXIV, XXXVIII) and one novel allele (LA-A). Of the 29 isolates with an alert value extended-spectrum cephalosporin MIC, all possessed the mosaic XXXIV penA allele and 18 were sequence type 1407, an internationally successful strain associated with multi-drug resistance. The modified RTPCR detected the mosaic XXXIV penA allele in urethral isolates and urine specimens and displayed no amplification of the other penA alleles detected in this study. N. gonorrhoeae isolates with mosaic penA alleles and reduced susceptibility to extended-spectrum cephalosporins are currently
Nonsmooth Newton method for Fischer function reformulation of contact force problems for interactive rigid body simulation
Silcowitz, Morten; Niebe, Sarah Maria; Erleben, Kenny
In this paper, we present a new approach to contact force determination. We reformulate the contact force problem as a nonlinear root search problem, using a Fischer function, and solve this problem using a generalized Newton method. Our new Fischer-Newton method shows improved qualities for specific configurations where the most widespread alternative, the Projected Gauss-Seidel method, fails. Experiments show superior convergence properties of the exact Fischer-Newton method.
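A minimal sketch of the reformulation idea, not the paper's simulator: a small linear complementarity problem (a common stand-in for frictionless contact force problems) is rewritten with a Fischer-Burmeister function and solved by a Newton iteration. For simplicity the sketch smooths the Fischer function with a small epsilon instead of working with a generalized Jacobian, and the matrix M and vector q are invented for illustration.

import numpy as np

def fb(a, b, eps=1e-10):
    # (Slightly smoothed) Fischer-Burmeister function:
    # fb(a, b) = 0  iff  a >= 0, b >= 0 and a*b = 0.
    return np.sqrt(a ** 2 + b ** 2 + eps) - a - b

def solve_lcp(M, q, iters=30, eps=1e-10):
    # Find z >= 0 with w = M z + q >= 0 and z.w = 0 via Phi(z) = fb(z, M z + q) = 0.
    z = np.ones(len(q))
    for _ in range(iters):
        w = M @ z + q
        r = np.sqrt(z ** 2 + w ** 2 + eps)
        # Jacobian of the smoothed residual: diag(z/r - 1) + diag(w/r - 1) @ M
        J = np.diag(z / r - 1.0) + np.diag(w / r - 1.0) @ M
        z = z - np.linalg.solve(J, fb(z, w, eps))
    return z

M = np.array([[2.0, 1.0], [1.0, 2.0]])    # illustrative "contact" matrix
q = np.array([-1.0, 1.0])
z = solve_lcp(M, q)
print("z =", z, " w = M z + q =", M @ z + q)   # expect z = (0.5, 0), w = (0, 1.5)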
Pengembangan Media Ice Breaker Talking Pen pada Mata Pelajaran PAI Kelas X SMAN 100 Jakarta
Ati Sulastri
This study aims to determine how to develop the ice breaker talking pen medium and to assess its feasibility for PAI (Islamic religious education) subjects. The research method used is the Borg and Gall development model, which includes needs analysis, a validation phase, and a trial phase. The result of this development research is an ice breaker talking pen media product consisting of command cards and music, developed through data collection, planning, product development, and validation and testing. Based on the validation results, the average score from the material experts was 4.75 (very good) and from the media experts 3.78 (good), and student responses to this medium averaged 4.39, a very good category. Therefore, the ice breaker talking pen medium for grade X PAI lessons is declared suitable for use, with a very good category. Keywords: Development Model Study, Ice Breaker Talking Pen, PAI
A Newton Algorithm for Multivariate Total Least Squares Problems
WANG Leyang
In order to improve the computational efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. Based on the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems with correlation between the observation matrix and the coefficient matrix, and can handle their stochastic and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
Dosing Accuracy of Insulin Aspart FlexPens After Transport Through the Pneumatic Tube System.
Ward, Leah G; Heckman, Michael G; Warren, Amy I; Tran, Kimberly
The purpose of this study was to evaluate whether transporting insulin aspart FlexPens via a pneumatic tube system affects the dosing accuracy of the pens. A total of 115 Novo Nordisk FlexPens containing insulin aspart were randomly assigned to be transported via a pneumatic tube system (n = 92) or to serve as the control (n = 23). Each pen was then randomized to 10 international unit (IU) doses (n = 25) or 30 IU doses (n = 67), providing 600 and 603 doses, respectively, for the pneumatic tube group. The control group also received random assignment to 10 IU doses (n = 6) or 30 IU doses (n = 17), providing 144 and 153 doses, respectively. Each dose was expelled using manufacturer instructions. Weights were recorded, corrected for specific gravity, and evaluated based on acceptable International Organization for Standardization (ISO) dosing limits. In the group of pens transported through the pneumatic tube system, none of the 600 doses of 10 IU (0.0%; 95% CI, 0.0 to 0.6) and none of the 603 doses of 30 IU (0.0%; 95% CI, 0.0 to 0.6) fell outside of the range of acceptable weights. Correspondingly, in the control group, none of the 144 doses at 10 IU (0.0%; 95% CI, 0.0 to 2.5) and none of the 153 doses at 30 IU (0.0%; 95% CI, 0.0 to 2.4) were outside of acceptable ISO limits. Transportation via pneumatic tube system does not appear to compromise dosing accuracy. Hospital pharmacies may rely on the pneumatic tube system for timely and accurate transport of insulin aspart FlexPens.
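The confidence intervals quoted for zero out-of-range doses behave like exact (Clopper-Pearson) binomial intervals; the short sketch below, which reproduces the 0.6%, 2.5% and 2.4% upper bounds, is offered as a plausible reading of how such limits are obtained rather than as the study's actual statistical code.

from scipy.stats import beta

def clopper_pearson(x, n, alpha=0.05):
    # Exact two-sided binomial confidence interval for x events in n trials.
    lo = 0.0 if x == 0 else beta.ppf(alpha / 2, x, n - x + 1)
    hi = 1.0 if x == n else beta.ppf(1 - alpha / 2, x + 1, n - x)
    return lo, hi

for n in (600, 603, 144, 153):            # dose groups with zero out-of-spec doses
    lo, hi = clopper_pearson(0, n)
    print(f"0/{n}: {100 * lo:.1f}% to {100 * hi:.1f}%")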
The effect of sex, slaughter weight and weight gains in PEN-AR-LAN ...
The aim of the study was to determine the effect of sex, body weight and growth rates on basic fattening and slaughter indexes in PEN-AR-LAN fatteners. The research was conducted on 274 PEN-ARLAN hybrid fatteners coming from sows of the Naïma maternal line and was sired by boars of the P-76 meat line. Recorded ...
Running Newton constant, improved gravitational actions, and galaxy rotation curves
Reuter, M.; Weyer, H.
A renormalization group (RG) improvement of the Einstein-Hilbert action is performed which promotes Newton's constant and the cosmological constant to scalar functions on spacetime. They arise from solutions of an exact RG equation by means of a 'cutoff identification' which associates RG scales to the points of spacetime. The resulting modified Einstein equations for spherically symmetric, static spacetimes are derived and analyzed in detail. The modifications of the Newtonian limit due to the RG evolution are obtained for the general case. As an application, the viability of a scenario is investigated where strong quantum effects in the infrared cause Newton's constant to grow at large (astrophysical) distances. For two specific RG trajectories exact vacuum spacetimes modifying the Schwarzschild metric are obtained by means of a solution-generating Weyl transformation. Their possible relevance to the problem of the observed approximately flat galaxy rotation curves is discussed. It is found that a power law running of Newton's constant with a small exponent of the order of 10^-6 would account for their non-Keplerian behavior without having to postulate the presence of any dark matter in the galactic halo.
Camera-pose estimation via projective Newton optimization on the manifold.
Sarkis, Michel; Diepold, Klaus
Determining the pose of a moving camera is an important task in computer vision. In this paper, we derive a projective Newton algorithm on the manifold to refine the pose estimate of a camera. The main idea is to benefit from the fact that the 3-D rigid motion is described by the special Euclidean group, which is a Riemannian manifold. The latter is equipped with a tangent space defined by the corresponding Lie algebra. This enables us to compute the optimization direction, i.e., the gradient and the Hessian, at each iteration of the projective Newton scheme on the tangent space of the manifold. Then, the motion is updated by projecting back the variables on the manifold itself. We also derive another version of the algorithm that employs homeomorphic parameterization to the special Euclidean group. We test the algorithm on several simulated and real image data sets. Compared with the standard Newton minimization scheme, we are now able to obtain the full numerical formula of the Hessian with a 60% decrease in computational complexity. Compared with Levenberg-Marquardt, the results obtained are more accurate while having a rather similar complexity.
Images of Famous People in Political Advertising: Youth Perceptions and Position
Liubov I. Ryumshina
The article touches upon the issues of youth awareness of recent political events and youth attitudes to political advertising, and examines youth perceptions and positions with respect to famous people, as well as candidates' participation in political advertising, using the example of the last presidential campaign. The study describes the most attractive elements of political advertising for youth
Forensic Analysis of Blue Ball point Pen Inks Using Ultraviolet-Visible Spectrometer and Ultra-Performance Liquid Chromatograph
Lee, L.C.; Shandu, K.T.S.; Nor Syahirah Mohamad Razi; Ab Aziz Ishak; Khairul Osman
Twelve varieties of blue ball point pens were selected and analyzed using a UV-Vis spectrometer and ultra-performance liquid chromatography (UPLC). The aim of the study was to determine the discrimination power (DP) of these methods in differentiating pen inks collected from the market in Malaysia. Discrimination analysis of the 66 possible pen pairs of blue ball point pens was carried out via one-way ANOVA based on the obtained chromatograms and spectra. A total of 18 peaks were determined as coming from inks based on the chromatographic data extracted at three different wavelengths (279, 370 and 400 nm). For the UV-Vis spectrometer analysis, peaks at 303, 545, 577 and 584 nm were recorded. UV-Vis spectral data are produced mainly by the colorant components (for example, dyes) found in inks, whereas UPLC may detect ink components other than dyes, for example, additives. In conclusion, the DP for UV-Vis and UPLC was determined to be 72.12 % and 98.48 %, respectively. This manuscript demonstrates the potential of UPLC for discriminating pen inks based on non-dye components. Additionally, the dye components in inks do not seem to play an important role in the discrimination of pen inks. (author)
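Discriminating power is usually reported as the fraction of all possible sample pairs that the method separates; the sketch below shows that arithmetic for the 66 pen pairs mentioned in the abstract (65 separated pairs corresponds to the 98.48% figure). The pairwise results themselves are hypothetical placeholders.

from itertools import combinations

def discriminating_power(n_separated_pairs, n_samples):
    # DP = separated pairs / all possible pairs, C(n_samples, 2).
    total_pairs = n_samples * (n_samples - 1) // 2
    return n_separated_pairs / total_pairs

pens = range(12)                               # 12 pens -> C(12, 2) = 66 pairs
all_pairs = list(combinations(pens, 2))
separated_by_uplc = len(all_pairs) - 1         # hypothetical: all but one pair
print(f"{len(all_pairs)} pairs, DP = {100 * discriminating_power(separated_by_uplc, 12):.2f}%")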
Growth of long triisopropylsilylethynyl pentacene (TIPS-PEN) nanofibrils in a polymer thin film during spin-coating.
Park, Minwoo; Min, Yuho; Lee, Yu-Jeong; Jeong, Unyong
This study demonstrates the growth of long triisopropylsilylethynyl pentacene (TIPS-PEN) nanofibrils in a thin film of a crystalline polymer, poly(ε-caprolactone) (PCL). During spin-coating, TIPS-PEN molecules are locally extracted around the PCL grain boundaries and crystallize along the [010] direction, forming long nanofibrils. The molecular weight of PCL and the weight fraction (α) of TIPS-PEN in the PCL matrix are key factors in the growth of the nanofibrils. Long, high-quality TIPS-PEN nanofibrils are obtained with high-molecular-weight PCL and at α values in the range of 0.03-0.1. The long nanofibrils are used as an active layer in an organic field-effect transistor. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
MPPT for Photovoltaic Modules via Newton-Like Extremum Seeking Control
Ramon Leyva
The paper adapts the Newton-like Extremum-Seeking Control technique to extract the maximum power from photovoltaic panels. This technique uses the gradient and Hessian of the panel characteristic in order to approximate the operating point to its optimum. The paper describes in detail the gradient and Hessian estimations carried out by means of sinusoidal dithering signals. Furthermore, we compare the proposed technique with the common Extremum Seeking Control that only uses the gradient. The comparison is done by means of PSIM simulations and it shows the different transient behaviors and the faster response of the Newton-like Extremum-Seeking Control solution.
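A minimal sketch of the Newton-like update at the heart of such a scheme: the operating voltage is moved by -P'(v)/P''(v) toward the maximum of the power curve. A real extremum-seeking controller estimates the first and second derivatives from sinusoidal dither and demodulation; here finite differences on an invented single-peak P-V curve stand in for those estimates.

import numpy as np

def pv_power(v):
    # Invented single-maximum P-V characteristic (illustrative only).
    return v * (8.0 - 0.02 * np.exp(0.25 * v))

def mppt_newton(v0=10.0, steps=15, h=0.05):
    v = v0
    for _ in range(steps):
        # Stand-ins for the dither-based gradient and curvature estimates.
        g = (pv_power(v + h) - pv_power(v - h)) / (2 * h)
        c = (pv_power(v + h) - 2 * pv_power(v) + pv_power(v - h)) / h ** 2
        if c < 0:
            v -= g / c                   # Newton-like step toward the maximum
        else:
            v += 0.1 * g                 # safeguard: plain gradient step
    return v

v_mpp = mppt_newton()
print(f"estimated MPP: {v_mpp:.2f} V, {pv_power(v_mpp):.1f} W")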
Newton-Cartan supergravity with torsion and Schrödinger supergravity
Bergshoeff, Eric; Rosseel, Jan; Zojer, Thomas
We derive a torsionfull version of three-dimensional N=2 Newton-Cartan supergravity using a non-relativistic notion of the superconformal tensor calculus. The "superconformal" theory that we start with is Schrödinger supergravity, which we obtain by gauging the Schrödinger superalgebra. We present two non-relativistic N=2 matter multiplets that can be used as compensators in the superconformal calculus. They lead to two different off-shell formulations which, in analogy with the relativistic case, we call "old minimal" and "new minimal" Newton-Cartan supergravity. We find similarities but also point out some differences with respect to the relativistic case.
Bergshoeff, Eric [Van Swinderen Institute for Particle Physics and Gravity, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands); Rosseel, Jan [Institute for Theoretical Physics, Vienna University of Technology,Wiedner Hauptstr. 8-10/136, A-1040 Vienna (Austria); Albert Einstein Center for Fundamental Physics, University of Bern,Sidlerstrasse 5, 3012 Bern (Switzerland); Zojer, Thomas [Van Swinderen Institute for Particle Physics and Gravity, University of Groningen,Nijenborgh 4, 9747 AG Groningen (Netherlands)
Enlarging the bounds of moral philosophy: Why did Isaac Newton conclude the Opticks the way he did?
Henry, John
This paper draws attention to the remarkable closing words of Isaac Newton's Optice (1706) and subsequent editions of the Opticks (1718, 1721), and tries to suggest why Newton chose to conclude his book with a puzzling allusion to his own unpublished conclusions about the history of religion. Newton suggests in this concluding passage that the bounds of moral philosophy will be enlarged as natural philosophy is 'perfected'. Asking what Newton might have had in mind, the paper first considers the idea that he was foreshadowing the 'moral Newtonianism' developed later in the eighteenth century; then it considers the idea that he was perhaps pointing to developments in natural theology. Finally, the paper suggests that Newton wanted to at least signal the importance of attempting to recover the true original religion, and perhaps was hinting at his intention to publish his own extensive research on the history of the Church.
Newton-Cartan supergravity with torsion and Schrodinger supergravity
We derive a torsionfull version of three-dimensional N=2 Newton-Cartan supergravity using a non-relativistic notion of the superconformal tensor calculus. The "superconformal" theory that we start with is Schrodinger supergravity which we obtain by gauging the Schrodinger superalgebra. We present
Torsional Newton-Cartan geometry and the Schrodinger algebra
Bergshoeff, Eric A.; Hartong, Jelle; Rosseel, Jan
We show that by gauging the Schrodinger algebra with critical exponent z and imposing suitable curvature constraints, that make diffeomorphisms equivalent to time and space translations, one obtains a geometric structure known as (twistless) torsional Newton-Cartan geometry (TTNC). This is a version
A Penning-assisted subkilovolt coaxial plasma source
Wang Zhehui; Beinke, Paul D.; Barnes, Cris W.; Martin, Michael W.; Mignardot, Edward; Wurden, Glen A.; Hsu, Scott C.; Intrator, Thomas P.; Munson, Carter P.
A Penning-assisted 20 MW coaxial plasma source (plasma gun), which can achieve breakdown at sub-kV voltages, is described. The minimum breakdown voltage is about 400 V, significantly lower than previously reported values of 1-5 kV. The Penning region for electrons is created using a permanent magnet assembly, which is mounted to the inside of the cathode of the coaxial plasma source. A theoretical model for the breakdown is given. A 900 V 0.5 F capacitor bank supplies energy for gas breakdown and plasma sustainment from 4 to 6 ms duration. Typical peak gun current is about 100 kA and gun voltage between anode and cathode after breakdown is about 200 V. A circuit model is used to understand the current-voltage characteristics of the coaxial gun plasma. Energy deposited into the plasma accounts for about 60% of the total capacitor bank energy. This plasma source is uniquely suitable for studying multi-MW multi-ms plasmas with sub-MJ capacitor bank energy
Emilie du Châtelet between Leibniz and Newton
Hagengruber, Ruth
This book describes Emilie du Chatelet known as "Emilia Newtonmania", and her innovative and outstanding position within the controversy between Newton and Leibniz, one of the fundamental scientific discourses of her time.
Technical note: whole-pen assessments of nutrient excretion and digestibility from dairy replacement heifers housed in sand-bedded freestalls.
Coblentz, W K; Hoffman, P C; Esser, N M; Bertram, M G
Our objectives were to describe and test refined procedures for quantifying excreta produced from whole pens of dairy heifers. Previous research efforts attempting to make whole-pen measurements of excreta output have been complicated by the use of organic bedding, which requires cumbersome analytical techniques to quantify excreta apart from the bedding. Research pens equipped with sand-bedded freestalls offer a unique opportunity for refinement of whole-pen fecal collection methods, primarily because sand-bedded freestall systems contain no organic bedding; therefore, concentrations of ash within the manure, sand, and feces can be used to correct for contamination of manure by sand bedding. This study was conducted on a subset of heifers from a larger production-scale feeding trial evaluating ensiled eastern gamagrass [Tripsacum dactyloides (L.) L.] haylage (EGG) that was incorporated into a corn silage/alfalfa haylage-based blended diet at rates of 0, 9.1, 18.3, or 27.4% of total DM. The diet without EGG also was offered on a limit-fed basis. Eighty Holstein dairy heifers were blocked (heavy weight, 424 ± 15.9 kg; light weight, 324 ± 22.4 kg) and then assigned to 10 individual pens containing 8 heifers/pen. One pen per block was assigned to each of the 5 research diets, and whole-pen fecal collections were conducted twice for each pen. Grab fecal samples also were gathered from individual heifers within each pen, and subsequent analysis of these whole-pen composites allowed reasonable estimates of OM and NDF excreta output. Under the conditions of our experimental design, pooled SEM for the excreta DM, OM, NDF, and NDF (ash corrected) output were 0.113, 0.085, 0.093, and 0.075 kg·heifer(-1)·d(-1), respectively. For DM excretion, this represented about one-third of the SEM reported for previous whole-pen collections from bedded-pack housing systems. Subsequent calculations of apparent DM and OM digestibilities indicated that the technique was sensitive, and
3D CSEM data inversion using Newton and Halley class methods
Amaya, M.; Hansen, K. R.; Morten, J. P.
For the first time in 3D controlled source electromagnetic data inversion, we explore the use of the Newton and the Halley optimization methods, which may show their potential when the cost function has a complex topology. The inversion is formulated as a constrained nonlinear least-squares problem which is solved by iterative optimization. These methods require the derivatives up to second order of the residuals with respect to model parameters. We show how Green's functions determine the high-order derivatives, and develop a diagrammatical representation of the residual derivatives. The Green's functions are efficiently calculated on-the-fly, making use of a finite-difference frequency-domain forward modelling code based on a multi-frontal sparse direct solver. This allows us to build the second-order derivatives of the residuals while keeping the memory cost in the same order as in a Gauss-Newton (GN) scheme. Model updates are computed with a trust-region based conjugate-gradient solver which does not require the computation of a stabilizer. We present inversion results for a synthetic survey and compare the GN, Newton, and super-Halley optimization schemes, and consider two different approaches to set the initial trust-region radius. Our analysis shows that the Newton and super-Halley schemes, using the same regularization configuration, add significant information to the inversion so that the convergence is reached by different paths. In our simple resistivity model examples, the convergence speed of the Newton and the super-Halley schemes is either similar or slightly superior with respect to the convergence speed of the GN scheme, close to the minimum of the cost function. Due to the current noise levels and other measurement inaccuracies in geophysical investigations, this advantageous behaviour is at present of low consequence, but may, with the further improvement of geophysical data acquisition, be an argument for more accurate higher-order methods like those
The Doppler effect and the three most famous experiments for special relativity
Klinaku, Shukri
Using the general formula for the Doppler effect at any arbitrary angle, the three famous experiments for special theory of relativity will be examined. Explanation of the experiments of Michelson, Kennedy-Thorndike and Ives-Stilwell will be given in a precise and elegant way without postulates, arbitrary assumptions or approximations.
Modified Block Newton method for the lambda modes problem
González-Pintor, S., E-mail: [email protected] [Departamento de Ingeniería Química y Nuclear, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Ginestar, D., E-mail: [email protected] [Instituto de Matemática Multidisciplinar, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain); Verdú, G., E-mail: [email protected] [Departamento de Ingeniería Química y Nuclear, Universidad Politécnica de Valencia, Camino de Vera 14, 46022 Valencia (Spain)
Highlights: • The Modal Method is based on expanding the solution in a set of dominant modes. • Updating the set of dominant modes improves its performance. • A Modified Block Newton Method, which uses previously calculated modes, is proposed. • The method exhibits very good local convergence with few iterations. • Good performance results are also obtained for heavy perturbations. -- Abstract: To study the behaviour of nuclear power reactors it is necessary to solve the time-dependent neutron diffusion equation using either a rectangular mesh for PWR and BWR reactors or a hexagonal mesh for VVER reactors. This problem can be solved by means of a modal method, which uses a set of dominant modes to expand the neutron flux. For transient calculations using the modal method with a moderate number of modes, these modes must be updated each time step to maintain the accuracy of the solution. The mode-updating process is also of interest for studying perturbed configurations of a reactor. A Modified Block Newton method is studied to update the modes. The performance of the Newton method has been tested for a steady-state perturbation analysis of two 2D hexagonal reactors, a perturbed configuration of the IAEA PWR 3D reactor and two configurations associated with a boron dilution transient in a BWR reactor.
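The lambda-modes problem is a generalized eigenvalue problem. The sketch below shows a plain Newton iteration on a single eigenpair with a normalization constraint, which conveys the flavour of Newton-based mode updating; it is not the Modified Block Newton method of the abstract (that method updates a whole block of previously calculated modes), and the small matrices are invented.

import numpy as np

def newton_eigenpair(A, B, x0, iters=12):
    # Newton's method on F(x, lam) = [A x - lam B x; c.x - 1] = 0.
    n = A.shape[0]
    c = x0 / (x0 @ x0)                        # fixed normalization vector
    x = x0.copy()
    lam = (x @ (A @ x)) / (x @ (B @ x))       # Rayleigh-quotient starting value
    for _ in range(iters):
        F = np.concatenate([A @ x - lam * (B @ x), [c @ x - 1.0]])
        J = np.zeros((n + 1, n + 1))
        J[:n, :n] = A - lam * B
        J[:n, n] = -(B @ x)
        J[n, :n] = c
        d = np.linalg.solve(J, -F)
        x, lam = x + d[:n], lam + d[n]
    return x, lam

A = np.array([[2.0, -1.0, 0.0],               # small "diffusion"-like operator
              [-1.0, 2.0, -1.0],
              [0.0, -1.0, 2.0]])
B = np.diag([1.0, 1.2, 0.9])                  # small "fission"-like operator
x, lam = newton_eigenpair(A, B, np.ones(3))
print("lambda =", lam, " residual =", np.linalg.norm(A @ x - lam * B @ x))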
From Pen-and-Paper Sketches to Prototypes: The Advanced Interaction Design Environment
Störrle, Harald
Pen and paper is still the best tool for sketching GUIs. However, sketches cannot be executed; at best we have facilitated or animated scenarios. The Advanced User Interaction Environment facilitates turning hand-drawn sketches into executable prototypes.
A smooth generalized Newton method for a class of non-smooth equations
Uko, L. U.
This paper presents a Newton-type iterative scheme for finding the zero of the sum of a differentiable function and a multivalued maximal monotone function. Local and semi-local convergence results are proved for the Newton scheme, and an analogue of the Kantorovich theorem is proved for the associated modified scheme that uses only one Jacobian evaluation for the entire iteration. Applications in variational inequalities are discussed, and an illustrative numerical example is given. (author). 24 refs
Control of the conformations of ion Coulomb crystals in a Penning trap
Mavadia, Sandeep; Goodwin, Joseph F.; Stutter, Graham; Bharadia, Shailen; Crick, Daniel R.; Segal, Daniel M.; Thompson, Richard C.
Laser-cooled atomic ions form ordered structures in radiofrequency ion traps and in Penning traps. Here we demonstrate in a Penning trap the creation and manipulation of a wide variety of ion Coulomb crystals formed from small numbers of ions. The configuration can be changed from a linear string, through intermediate geometries, to a planar structure. The transition from a linear string to a zigzag geometry is observed for the first time in a Penning trap. The conformations of the crystals are set by the applied trap potential and the laser parameters, and agree with simulations. These simulations indicate that the rotation frequency of a small crystal is mainly determined by the laser parameters, independent of the number of ions and the axial confinement strength. This system has potential applications for quantum simulation, quantum information processing and tests of fundamental physics models from quantum field theory to cosmology. PMID:24096901
Digital assist: A comparison of two note-taking methods (traditional vs. digital pen) for students with emotional behavioral disorders
Rody, Carlotta A.
High school biology classes traditionally follow a lecture format to disseminate content and new terminology. With the inclusive practices of No Child Left Behind, the Common Core State Standards, and end-of-course exam requirement for high school diplomas, classes include a large range of achievement levels and abilities. Teachers assume, often incorrectly, that students come to class prepared to listen and take notes. In a standard diploma, high school biology class in a separate school for students with emotional and behavioral disorders, five students participated in a single-subject, alternating treatment design study that compared the use of regular pens and digital pens to take notes during 21 lecture sessions. Behavior measures were threefold between the two interventions: (a) quantity of notes taken per minute during lectures, (b) quantity of notes or notations taken during review pauses, and (c) percent of correct responses on the daily comprehension quizzes. The study's data indicated that two students were inclined to take more lecture notes when using the digital pen. Two students took more notes with the regular pen. One student demonstrated no difference in her performance with either pen type. Both female students took more notes per minute, on average, than the three males regardless of pen type. During the review pause, three of the five students only added notes or notations to their notes when using the regular pen. The remaining two students did not add to their notes. Quiz scores differed in favor of the regular pen. All five participants earned higher scores on quizzes given during regular pen sessions. However, the differences were minor, and recommendations are made for specific training in note-taking, the pause strategy, and digital pen fluency which may produce different results for both note-taking and quiz scores.
Transport of three veterinary antimicrobials from feedlot pens via simulated rainfall runoff.
Sura, Srinivas; Degenhardt, Dani; Cessna, Allan J; Larney, Francis J; Olson, Andrew F; McAllister, Tim A
Veterinary antimicrobials are introduced to wider environments by manure application to agricultural fields or through leaching or runoff from manure storage areas (feedlots, stockpiles, windrows, lagoons). Detected in manure, manure-treated soils, and surface and ground water near intensive cattle feeding operations, there is a concern that environmental contamination by these chemicals may promote the development of antimicrobial resistance in bacteria. Surface runoff and leaching appear to be major transport pathways by which veterinary antimicrobials eventually contaminate surface and ground water, respectively. A study was conducted to investigate the transport of three veterinary antimicrobials (chlortetracycline, sulfamethazine, tylosin), commonly used in beef cattle production, in simulated rainfall runoff from feedlot pens. Mean concentrations of veterinary antimicrobials were 1.4 to 3.5 times higher in surface material from bedding vs. non-bedding pen areas. Runoff rates and volumetric runoff coefficients were similar across all treatments but both were significantly higher from non-bedding (0.53Lmin(-1); 0.27) than bedding areas (0.40Lmin(-1); 0.19). In keeping with concentrations in pen surface material, mean concentrations of veterinary antimicrobials were 1.4 to 2.5 times higher in runoff generated from bedding vs. non-bedding pen areas. Water solubility and sorption coefficient of antimicrobials played a role in their transport in runoff. Estimated amounts of chlortetracycline, sulfamethazine, and tylosin that could potentially be transported to the feedlot catch basin during a one in 100-year precipitation event were 1.3 to 3.6ghead(-1), 1.9ghead(-1), and 0.2ghead(-1), respectively. This study demonstrates the magnitude of veterinary antimicrobial transport in feedlot pen runoff and supports the necessity of catch basins for runoff containment within feedlots. Crown Copyright © 2015. Published by Elsevier B.V. All rights reserved.
Pen confinement of yearling ewes with cows or heifers for 14 days to produce bonded sheep.
Fredrickson, E L.; Anderson, D M.; Estell, R E.; Havstad, K M.; Shupe, W L.; Remmenga, M D.
Mixed species stocking is commonly a more ecologically sound and efficient use of forage resources than single species stocking, especially in pastures having complex assemblages of forage species. However, in many environments livestock predation, especially on smaller ruminants, adds an extra challenge to mixed species stocking. When mixed sheep and cattle remain consistently as a cohesive group (flerd), predation risks are lessened, while fencing and herding costs are reduced. To establish a cohesive group (bond), a 30-day bonding period in which young sheep and cattle pairs are penned together is currently recommended. The purpose of this research was to test whether a bond could be produced with a shorter, 14-day pen confinement, evaluated in an open field test. Other behaviors were also noted and recorded during the field test. Separation distance did not differ (P=0.973) between the PC and PH treatments; however, separation distance did differ between NC and NH. Animals previously penned for 14 days spent more time grazing and less time walking than animals not previously penned for 14 days. Penned animals also vocalized less than non-penned animals during the open field test. The bond sheep formed to the bovines was not affected by cow age. These data suggest that inter-specific bond formation using pen confinement can be accomplished within 14 days, representing a 53% savings in time and associated costs when compared to pen confinement lasting 30 days.
Insulin pen needles: effects of extra-thin wall needle technology on preference, confidence, and other patient ratings.
Aronson, Ronnie; Gibney, Michael A; Oza, Kunjal; Bérubé, Julie; Kassler-Taub, Kenneth; Hirsch, Laurence
Pen needles (PNs) are essential for insulin injections using pen devices. PN characteristics affect patients' injection experience. The goal of this study was to evaluate the impact of a new extra-thin wall (XTW) PN versus usual PNs on overall patient preference, ease of injection, perceived time to complete the full dose, thumb button force to deliver the injection, and dose delivery confidence in individuals with diabetes mellitus (DM). Subjects injected insulin with the KwikPen(TM) (Eli Lilly and Company, Indianapolis, Indiana), SoloSTAR(®) (sanofi-aventis U.S. LLC, Bridgewater, New Jersey), and FlexPen(®) (Novo Nordisk A/S, Bagsvaerd, Denmark) insulin pens, and included some with impaired hand dexterity. We first performed quantitative testing of XTW and comparable PNs with the 3 insulin pens for thumb force, flow rate, and time to deliver medication. A prospective, randomized, 2-period, open-label, crossover trial was then conducted in patients aged 35 to 80 years with type 1 or type 2 DM who injected insulin by pen for ≥2 months, with at least 1 daily dose ≥10 U. Patients who used 4- to 8-mm length PNs with 31- to 32-G diameter were randomly assigned to use their current PN or the same/similar size XTW PN at home for ~1 week and the other PN the second week. They completed several comparative 150-mm visual analog scales and direct questions at the end of period 2. XTW PNs had statistically significant better performance for each studied PN characteristic (thumb force, flow, and time to deliver medication) for all pens combined and each individual pen brand (all, P ≤ 0.05). Of 216 patients randomized to study groups (80, SoloSTAR; 77, FlexPen; 59, KwikPen), 209 completed both periods; 198 were evaluable. Baseline characteristics revealed a mean (SD) age of 60.8 (9.3) years, insulin pen use duration of 4.3 (4.1) years, and mean total daily dose of 75.1 (52.3) U (range, 10-420 U). Approximately 50% of patients were female; 81.5% were white and 14.8% were
Porobic, T.; Beck, M.; Breitenfeldt, M.; Couratin, C.; Finlay, P.; Knecht, A.; Fabian, X.; Friedag, P.; Flechard, X.; Lienard, E.; Ban, G.; Zákoucký, Dalibor; Soti, G.; Van Gorp, S.; Weinheimer, C.; Wursten, E.; Severijns, N.
Roč. 785, JUN (2015), s. 153-162 ISSN 0168-9002 R&D Projects: GA MŠk LA08015; GA MŠk(CZ) LG13031 Institutional support: RVO:61389005 Keywords: Penning trap * space-charge * magnetron motion * ion trapping * buffer gas cooling * ion cyclotron resonance Subject RIV: BG - Nuclear, Atomic and Molecular Physics, Colliders Impact factor: 1.200, year: 2015
Propylthiouracil Attenuates Experimental Pulmonary Hypertension via Suppression of Pen-2, a Key Component of Gamma-Secretase.
Ying-Ju Lai
Gamma-secretase-mediated Notch3 signaling is involved in smooth muscle cell (SMC) hyper-activity and proliferation leading to pulmonary arterial hypertension (PAH). In addition, propylthiouracil (PTU), beyond its anti-thyroid action, has suppressive effects on atherosclerosis and PAH. Here, we investigated the possible involvement of gamma-secretase-mediated Notch3 signaling in PTU-inhibited PAH. In rats with monocrotaline-induced PAH, PTU therapy improved pulmonary arterial hypertrophy and hemodynamics. In vitro, treatment of PASMCs from monocrotaline-treated rats with PTU inhibited their proliferation and migration. Immunocyto- and histochemistry and western blot showed that PTU treatment attenuated the activation of Notch3 signaling in PASMCs from monocrotaline-treated rats, which was mediated via inhibition of gamma-secretase expression, especially its presenilin enhancer 2 (Pen-2) subunit. Furthermore, over-expression of Pen-2 in PASMCs from control rats increased the capacity for migration, whereas knockdown of Pen-2 with its respective siRNA in PASMCs from monocrotaline-treated rats had the opposite effect. Transfection of PASMCs from monocrotaline-treated rats with Pen-2 siRNA blocked the inhibitory effect of PTU on PASMC proliferation and migration, reflecting the crucial role of Pen-2 in the PTU effect. We present a novel cell-signaling paradigm in which overexpression of Pen-2 is essential for experimental pulmonary arterial hypertension to promote motility and growth of smooth muscle cells. Propylthiouracil attenuates experimental PAH via suppression of gamma-secretase-mediated Notch3 signaling, especially its presenilin enhancer 2 (Pen-2) subunit. These findings provide a deep insight into the pathogenesis of PAH and a novel therapeutic strategy.
Heat kernel for Newton-Cartan trace anomalies
Auzzi, Roberto [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); INFN Sezione di Perugia, Via A. Pascoli, Perugia, 06123 (Italy); Nardelli, Giuseppe [Dipartimento di Matematica e Fisica, Università Cattolica del Sacro Cuore, Via Musei 41, Brescia, 25121 (Italy); TIFPA - INFN, Università di Trento,c/o Dipartimento di Fisica, Povo, TN, 38123 (Italy)
We compute the leading part of the trace anomaly for a free non-relativistic scalar in 2+1 dimensions coupled to a background Newton-Cartan metric. The anomaly is proportional to 1/m, where m is the mass of the scalar. We comment on the implications of a conjectured a-theorem for non-relativistic theories with boost invariance.
Newton's First Law: A Learning Cycle Approach
McCarthy, Deborah
To demonstrate how Newton's first law of motion applies to students' everyday lives, the author developed a learning cycle series of activities on inertia. The discrepant event at the heart of these activities is sure to elicit wide-eyed stares and puzzled looks from students, but also promote critical thinking and help bring an abstract concept…
CAIXA: a catalogue of AGN in the XMM-Newton archive. III. Excess variance analysis
Ponti, G.; Papadakis, I.; Bianchi, S.; Guainazzi, M.; Matt, G.; Uttley, P.; Bonilla, N.F.
Context. We report on the results of the first XMM-Newton systematic "excess variance" study of all the radio-quiet, X-ray unobscured AGN. The entire sample consists of 161 sources observed by XMM-Newton for more than 10 ks in pointed observations, which is the largest sample used so far to study
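For reference, the normalized excess variance of a light curve is commonly defined as sigma^2_NXS = (1/(N mean^2)) * sum[(x_i - mean)^2 - err_i^2]; the sketch below applies that textbook definition to a synthetic light curve and is not the paper's pipeline.

import numpy as np

def normalized_excess_variance(flux, err):
    # sigma^2_NXS = (1 / (N * mean^2)) * sum[(x_i - mean)^2 - err_i^2]
    mu = flux.mean()
    return np.sum((flux - mu) ** 2 - err ** 2) / (flux.size * mu ** 2)

rng = np.random.default_rng(42)
intrinsic = 10.0 + np.cumsum(rng.normal(scale=0.3, size=200))  # red-noise-like variability
err = np.full(200, 0.5)                                        # measurement errors
flux = intrinsic + rng.normal(scale=err)
print("sigma^2_NXS =", normalized_excess_variance(flux, err))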
Prevalence of digital dermatitis in young stock in Alberta, Canada, using pen walks.
Jacobs, C; Orsel, K; Barkema, H W
Digital dermatitis (DD), an infectious bacterial foot lesion prevalent in dairy cattle worldwide, reduces both animal welfare and production. This disease was recently identified in replacement dairy heifers, with implications including increased risk of DD and decreased milk production in first lactation, poor reproductive performance, and altered hoof conformation. Therefore, a simple and effective method is needed to identify DD in young stock and to determine risk factors for DD in this group so that effective control strategies can be implemented. The objectives of this study were to (1) determine prevalence of DD in young stock (based on pen walks); and (2) identify potential risk factors for DD in young stock. A cross-sectional study was conducted on 28 dairy farms in Alberta, Canada; pen walks were used to identify DD (present/absent) on the hind feet of group-housed, young dairy stock. A subset of 583 young stock on 5 farms were selected for chute inspection of feet to determine the accuracy of pen walks for DD detection. Pen walks as a means of identifying DD lesions on the hind feet in young stock had sensitivity and specificity at the animal level of 65 and 98%, with positive and negative predictive values of 94 and 83%, respectively, at a prevalence of 37%. At the foot level, pen walks had sensitivity and specificity of 62 and 98%, respectively, with positive and negative predictive values of 92 and 88%, respectively, at a prevalence of 26%. Pen walks identified DD in 79 [2.9%; 95% confidence interval (95% CI): 2.3-3.6%] of 2,815 young stock on 11 (39%; 95% CI: 22-59%) of 28 farms, with all 79 DD-positive young stock ≥309 d of age. Apparent within-herd prevalence estimates ranged from 0 to 9.3%, with a mean of 1.4%. True within-herd prevalence of DD in young stock, calculated using the sensitivity and specificity of the pen walks, ranged from 0 to 12.6%, with a mean of 1.4%. On the 11 DD-positive farms, the proportion of young stock >12 mo of age
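The predictive values and the apparent-to-true prevalence correction quoted above follow from standard screening-test arithmetic; the sketch below reproduces them approximately (small differences are rounding) and is offered only as a reading of that arithmetic, not as the study's code.

def predictive_values(se, sp, prev):
    ppv = se * prev / (se * prev + (1 - sp) * (1 - prev))
    npv = sp * (1 - prev) / (sp * (1 - prev) + (1 - se) * prev)
    return ppv, npv

def rogan_gladen(apparent_prev, se, sp):
    # True prevalence estimated from apparent prevalence, sensitivity and specificity.
    return (apparent_prev + sp - 1.0) / (se + sp - 1.0)

se, sp = 0.65, 0.98                                   # animal-level pen-walk figures
ppv, npv = predictive_values(se, sp, 0.37)
print(f"PPV ~ {100 * ppv:.0f}%, NPV ~ {100 * npv:.0f}%")           # ~95% and ~83%
print(f"true prevalence for 2.9% apparent ~ {100 * rogan_gladen(0.029, se, sp):.1f}%")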
Improved Quasi-Newton method via PSB update for solving systems of nonlinear equations
Mamat, Mustafa; Dauda, M. K.; Waziri, M. Y.; Ahmad, Fadhilah; Mohamad, Fatma Susilawati
The Newton method has some shortcomings, which include computing the Jacobian matrix, which may be difficult or even impossible, and solving the Newton system in every iteration. Also, a common setback of some quasi-Newton methods is that they need to compute and store an n × n matrix at each iteration, which is computationally costly for large-scale problems. To overcome such drawbacks, an improved method for solving systems of nonlinear equations via the PSB (Powell-Symmetric-Broyden) update is proposed. In the proposed method, the approximate inverse Jacobian Hk of PSB is updated and its efficiency is improved, requiring low memory storage, which is the main aim of this paper. Preliminary numerical results show that the proposed method is practically efficient when applied to some benchmark problems.
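A minimal sketch of the classical PSB (Powell-Symmetric-Broyden) update on a small two-equation system, included only to make the update formula concrete; the paper's contribution of updating the approximate inverse Jacobian (and its benchmark problems) is not reproduced here, and the test system below is invented.

import numpy as np

def F(x):
    # Invented 2-equation nonlinear system with a root at (1, 1).
    return np.array([x[0] ** 2 + x[1] ** 2 - 2.0,
                     np.exp(x[0] - 1.0) + x[1] ** 3 - 2.0])

def psb_solve(x0, iters=40, tol=1e-10):
    x = np.array(x0, dtype=float)
    B = np.eye(x.size)                         # initial Jacobian approximation
    for _ in range(iters):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        s = np.linalg.solve(B, -Fx)            # quasi-Newton step
        y = F(x + s) - Fx
        r = y - B @ s
        ss = s @ s
        # PSB update: symmetric rank-two correction satisfying the secant condition B s = y.
        B = B + (np.outer(r, s) + np.outer(s, r)) / ss - ((r @ s) / ss ** 2) * np.outer(s, s)
        x = x + s
    return x

print(psb_solve([1.5, 1.5]))                   # converges toward (1, 1)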
What Makes Your Writing Style Unique? Significant Differences Between Two Famous Romanian Orators
Dascalu, Mihai; Gifu, Daniela; Trausan-Matu, Stefan
This paper introduces a novel, in-depth approach of analyzing the differences in writing style between two famous Romanian orators, based on automated textual complexity indices for Romanian language. The considered authors are: (a) Mihai Eminescu, Romania's national poet and a
Structural modifications of swift heavy ion irradiated PEN probed by optical and thermal measurements
Devgan, Kusum; Singh, Lakhwant; Samra, Kawaljeet Singh
Highlights: • The present paper reports the effect of swift heavy ion irradiation on Polyethylene Naphthalate (PEN). • Swift heavy ion irradiation introduces structural modification and degradation of PEN at different doses. • Lower irradiation doses in PEN result in modification of structural properties and higher doses lead to complete degradation. • Strong correlation between structural, optical, and thermal properties. - Abstract: The effects of swift heavy ion irradiation on the structural characteristics of Polyethylene naphthalate (PEN) were studied. Samples were irradiated in vacuum at room temperature by lithium (50 MeV), carbon (85 MeV), nickel (120 MeV) and silver (120 MeV) ions with the fluence in the range of 1×10^11–3×10^12 ions cm^-2. Ion induced changes were analyzed using X-ray diffraction (XRD), Fourier transform infra red (FT-IR), UV–visible spectroscopy, thermo-gravimetric analysis (TGA) and differential scanning calorimetry (DSC) techniques. Cross-linking was observed at lower doses resulting in modification of structural properties, however higher doses lead to the degradation of the investigated polymeric samples
Newton-Hooke spacetimes, Hpp-waves and the cosmological constant
Gibbons, G W; Patricot, C E
We show explicitly how the Newton-Hooke groups N ± 10 act as symmetries of the equations of motion of non-relativistic cosmological models with a cosmological constant. We give the action on the associated non-relativistic spacetimes M ± 4 and show how these may be obtained from a null reduction of five-dimensional homogeneous pp-wave Lorentzian spacetimes M ± 5 . This allows us to realize the Newton-Hooke groups and their Bargmann-type central extensions as subgroups of the isometry groups of M ± 5 . The extended Schroedinger-type conformal group is identified and its action on the equations of motion given. The non-relativistic conformal symmetries also have applications to time-dependent harmonic oscillators. Finally we comment on a possible application to Gao's generalization of the matrix model
An EPR at Penly: an outline from the SFEN to feed the public debate
Penly-3 is the project to build an EPR reactor as a third unit on the Penly site (France). The authors have reviewed 5 reasons to back it: 1) nuclear power is a useful source of energy at the world scale, 2) nuclear power is an adequate solution to meet our future needs of energy, 3) the EPR is at the top of today's nuclear technology, 4) nuclear power is an efficient tool to diminish CO2 releases, and 5) the EPR is a valuable asset to maintain France in the top group of world actors in the nuclear sector. A public debate will be held soon concerning the Penly-3 project. (A.C.)
Study on the algorithm for Newton-Rapson iteration interpolation of NURBS curve and simulation
Zhang, Wanjun; Gao, Shanping; Cheng, Xiyan; Zhang, Feng
The Newton-Rapson iteration interpolation method for NURBS curves suffers from problems such as long interpolation times, complicated calculations, and a NURBS curve step error that is not easily adjusted. This paper therefore studies and simulates an algorithm for Newton-Rapson iteration interpolation of NURBS curves, in which the Newton-Rapson iteration is used to calculate the interpolation points (xi, yi, zi). Simulation results show that the proposed NURBS curve interpolator meets the high-speed and high-accuracy interpolation requirements of CNC systems and that the algorithm is correct and consistent with NURBS curve interpolation requirements.
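The core computation in such an interpolator is a Newton solve for the curve parameter u at which the arc length equals the commanded feed increment, using ds/du = |C'(u)|. The sketch below applies that iteration to a cubic Bezier segment standing in for a NURBS curve; the curve, target length and starting guess are illustrative assumptions.

import numpy as np
from scipy.integrate import quad

# Cubic Bezier segment as a stand-in for a NURBS curve.
P = np.array([[0.0, 0.0], [1.0, 2.0], [3.0, 2.0], [4.0, 0.0]])

def d_curve(u):
    # C'(u) for a cubic Bezier curve.
    return 3.0 * ((1 - u) ** 2 * (P[1] - P[0])
                  + 2 * (1 - u) * u * (P[2] - P[1])
                  + u ** 2 * (P[3] - P[2]))

def speed(u):
    return float(np.linalg.norm(d_curve(u)))    # ds/du

def arc_length(u):
    return quad(speed, 0.0, u)[0]

def newton_param(target_s, u0, iters=8):
    # Solve arc_length(u) = target_s with Newton-Raphson iteration.
    u = u0
    for _ in range(iters):
        u -= (arc_length(u) - target_s) / speed(u)
        u = min(max(u, 0.0), 1.0)
    return u

total = arc_length(1.0)
target = 0.3 * total                             # commanded feed increment
u = newton_param(target, u0=0.3)                 # chord-length style first guess
print(f"u = {u:.5f}, arc length error = {arc_length(u) - target:.2e}")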
Reducing alarms and prioritising interventions in pig production by simultaneous monitoring of water consumption in multiple pens
Dominiak, Katarina Nielsen; Hindsborg, Jeff; Pedersen, L. J.
Alarms can be generated at pen level or merged at section or herd level to reduce the number of alarms. Information on which specific pens or sections are at higher risk of stress or disease is communicated to the farmer and targets work effort to the pens at risk. For Herd A, model parameters defined at section level resulted in the best fit (MSE = 13.85 litres^2/hour). For Herd B, parameters defined at both pen and section level resulted in the best fit (MSE = 1.47 litres^2/hour). For both Herd A and Herd B, preliminary results support the spatial approach by generating a reduced number of alarms when comparing section- with pen-level monitoring.
The XMM-Newton Science Archive and its integration into ESASky
Loiseau, N.; Baines, D.; Colomo, E.; Giordano, F.; Merín, B.; Racero, E.; Rodríguez, P.; Salgado, J.; Sarmiento, M.
We describe the variety of functionalities of the XSA (XMM-Newton Science Archive) that allow to search and access the XMM-Newton data and catalogues. The web interface http://nxsa.esac.esa.int/ is very flexible allowing different kinds of searches by a single position or target name, or by a list of targets, with several selecting options (target type, text in the abstract, etc.), and with several display options. The resulting data can be easily broadcast to Virtual Observatory (VO) facilities for a first look analysis, or for cross-matching the results with info from other observatories. Direct access via URL or command line are also possible for scripts usage, or to link XMM-Newton data from other interfaces like Vizier, ADS, etc. The full metadata content of the XSA can be queried through the TAP (Table access Protocol) via ADQL (Astronomical Data Query Language). We present also the roadmap for future improvements of the XSA including the integration of the Upper Limit server, the on-the-fly data analysis, and the interactive visualization of EPIC sources spectra and light curves and RGS spectra, among other advanced features. Within this modern visualization philosophy XSA is also being integrated into ESASky (http://sky.esa.int). ESASky is the science-driven multi-wavelength discovery portal for all the ESA Astronomy Missions (Integral, HST, Herschel, Suzaku, Planck, etc.), and other space and ground telescope data. The system offers progressive multi-resolution all-sky projections of full mission datasets using HiPS, a new generation of HEALPix projections developed by CDS, precise footprints to connect to individual observations, and direct access to science-ready data from the underlying mission specific science archives. XMM-Newton EPIC and OM all-sky HiPS maps, catalogues and links to the observations are available through ESASky.
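As a rough illustration of querying the XSA metadata through TAP with ADQL (mentioned above), the snippet below issues a standard IVOA synchronous TAP request with Python's requests library. The TAP endpoint URL, table name and column names are assumptions made for the example — only the web interface URL is given in the text — so they should be checked against the archive documentation before use.

import requests

TAP_SYNC_URL = "http://nxsa.esac.esa.int/tap-server/tap/sync"    # assumed endpoint
ADQL = """
SELECT TOP 10 observation_id, target, ra, dec
FROM v_public_observations
WHERE 1 = CONTAINS(POINT('ICRS', ra, dec),
                   CIRCLE('ICRS', 83.63, 22.01, 0.5))
"""                                    # assumed table and column names

resp = requests.get(TAP_SYNC_URL, params={
    "REQUEST": "doQuery",              # standard IVOA TAP parameters
    "LANG": "ADQL",
    "FORMAT": "csv",
    "QUERY": ADQL,
})
resp.raise_for_status()
print(resp.text)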
The continuous, desingularized Newton method for meromorphic functions
For any (nonconstant) meromorphic function, we present a real analytic dynamical system, which may be interpreted as an infinitesimal version of Newton's method for finding its zeros. A fairly complete description of the local and global features of the phase portrait of such a system is obtained
Easy XMM-Newton Data Analysis with the Streamlined ABC Guide!
Valencic, Lynne A.; Snowden, Steven L.; Pence, William D.
The US XMM-Newton GOF has streamlined the time-honored XMM-Newton ABC Guide, making it easier to find and use what users may need to analyze their data. It takes into account what type of data a user might have, if they want to reduce the data on their own machine or over the internet with Web Hera, and if they prefer to use the command window or a GUI. The GOF has also included an introduction to analyzing EPIC and RGS spectra, and PN Timing mode data. The guide is provided for free to students, educators, and researchers for educational and research purposes. Try it out at: http://heasarc.gsfc.nasa.gov/docs/xmm/sl/intro.html
A Prototype of Mathematical Treatment of Pen Pressure Data for Signature Verification.
Li, Chi-Keung; Wong, Siu-Kay; Chim, Lai-Chu Joyce
A prototype using simple mathematical treatment of the pen pressure data recorded by a digital pen movement recording device was derived. In this study, a total of 48 sets of signature and initial specimens were collected. Pearson's correlation coefficient was used to compare the data of the pen pressure patterns. From the 820 pair comparisons of the 48 sets of genuine signatures, a high degree of matching was found, in which 95.4% (782 pairs) and 80% (656 pairs) had rPA > 0.7 and rPA > 0.8, respectively. In the comparison of the 23 forged signatures with their corresponding control signatures, 20 of them (89.2% of pairs) had rPA values below the range observed for the genuine signatures. The prototype could be used as a complementary technique to improve the objectivity of signature examination and also has good potential to be developed as a tool for automated signature identification. © 2017 American Academy of Forensic Sciences.
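The comparison statistic in this study is a Pearson correlation between recorded pen-pressure sequences. A minimal sketch with synthetic, already-aligned traces is shown below; real traces would first need resampling and alignment to a common length, and the synthetic signals are invented.

import numpy as np

def pressure_similarity(trace_a, trace_b):
    # Pearson correlation coefficient r_PA between two equal-length pressure traces.
    return np.corrcoef(trace_a, trace_b)[0, 1]

rng = np.random.default_rng(7)
t = np.linspace(0.0, 1.0, 200)
genuine = np.sin(2 * np.pi * 3 * t) ** 2 + 0.05 * rng.normal(size=t.size)
repeat = np.sin(2 * np.pi * 3 * t) ** 2 + 0.05 * rng.normal(size=t.size)
forgery = np.abs(np.sin(2 * np.pi * 2 * t)) + 0.05 * rng.normal(size=t.size)

print("genuine vs repeat :", round(pressure_similarity(genuine, repeat), 3))
print("genuine vs forgery:", round(pressure_similarity(genuine, forgery), 3))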
A Newton-based Jacobian-free approach for neutronic-Monte Carlo/thermal-hydraulic static coupled analysis
Mylonakis, Antonios G.; Varvayanni, M.; Catsaros, N.
Highlights: • A Newton-based Jacobian-free Monte Carlo/thermal-hydraulic coupling approach is introduced. • OpenMC is coupled with COBRA-EN with a Newton-based approach. • The introduced coupling approach is tested in numerical experiments. • The performance of the new approach is compared with the traditional "serial" coupling approach. -- Abstract: In the field of nuclear reactor analysis, multi-physics calculations that account for the bonded nature of the neutronic and thermal-hydraulic phenomena are of major importance for both reactor safety and design. So far in the context of Monte-Carlo neutronic analysis a kind of "serial" algorithm has been mainly used for coupling with thermal-hydraulics. The main motivation of this work is the interest for an algorithm that could maintain the distinct treatment of the involved fields within a tight coupling context that could be translated into higher convergence rates and more stable behaviour. This work investigates the possibility of replacing the usually used "serial" iteration with an approximate Newton algorithm. The selected algorithm, called Approximate Block Newton, is actually a version of the Jacobian-free Newton Krylov method suitably modified for coupling mono-disciplinary solvers. Within this Newton scheme the linearised system is solved with a Krylov solver in order to avoid the creation of the Jacobian matrix. A coupling algorithm between Monte-Carlo neutronics and thermal-hydraulics based on the above-mentioned methodology is developed and its performance is analysed. More specifically, OpenMC, a Monte-Carlo neutronics code and COBRA-EN, a thermal-hydraulics code for sub-channel and core analysis, are merged in a coupling scheme using the Approximate Block Newton method aiming to examine the performance of this scheme and compare with that of the "traditional" serial iterative scheme. First results show a clear improvement of the convergence especially in problems where significant
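A minimal sketch of the Jacobian-free Newton-Krylov idea on an invented two-field toy problem (a scalar "flux" and "temperature" with mutual feedback), using SciPy's newton_krylov so that no Jacobian of the coupled system is ever formed; it is meant only to convey the coupling concept, not the OpenMC/COBRA-EN scheme of the paper.

import numpy as np
from scipy.optimize import newton_krylov

def coupled_residual(u):
    # u = [phi, T]: toy "flux" and "temperature" with two-way feedback.
    phi, T = u
    r_neutronic = phi - 1.0 / (1.0 + 0.002 * (T - 300.0))   # flux depends on temperature
    r_thermal = T - (300.0 + 50.0 * phi)                     # heating proportional to flux
    return np.array([r_neutronic, r_thermal])

# One Newton scheme for the whole coupled residual; the Krylov solver only
# needs residual evaluations, so the coupled Jacobian is never built.
sol = newton_krylov(coupled_residual, np.array([1.0, 300.0]), f_tol=1e-10)
print("phi = %.6f, T = %.3f" % (sol[0], sol[1]))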
Hořava-Lifshitz gravity from dynamical Newton-Cartan geometry
Hartong, Jelle; Obers, Niels A.
Recently it has been established that torsional Newton-Cartan (TNC) geometry is the appropriate geometrical framework to which non-relativistic field theories couple. We show that when these geometries are made dynamical they give rise to Hořava-Lifshitz (HL) gravity. Projectable HL gravity corresponds to dynamical Newton-Cartan (NC) geometry without torsion and non-projectable HL gravity corresponds to dynamical NC geometry with twistless torsion (hypersurface orthogonal foliation). We build a precise dictionary relating all fields (including the scalar khronon), their transformations and other properties in both HL gravity and dynamical TNC geometry. We use TNC invariance to construct the effective action for dynamical twistless torsional Newton-Cartan geometries in 2+1 dimensions for dynamical exponent 1
Hartong, Jelle [Physique Théorique et Mathématique and International Solvay Institutes, Université Libre de Bruxelles,C.P. 231, 1050 Brussels (Belgium); Obers, Niels A. [The Niels Bohr Institute, Copenhagen University,Blegdamsvej 17, DK-2100 Copenhagen Ø (Denmark)
The influence of facility and home pen design on the welfare of the laboratory-housed dog.
Scullion Hall, Laura E M; Robinson, Sally; Finch, John; Buchanan-Smith, Hannah M
We have an ethical and scientific obligation to Refine all aspects of the life of the laboratory-housed dog. Across industry there are many differences amongst facilities, home pen design and husbandry, as well as differences in features of the dogs such as strain, sex and scientific protocols. Understanding how these influence welfare, and hence scientific output is therefore critical. A significant proportion of dogs' lives are spent in the home pen and as such, the design can have a considerable impact on welfare. Although best practice guidelines exist, there is a paucity of empirical evidence to support the recommended Refinements and uptake varies across industry. In this study, we examine the effect of modern and traditional home pen design, overall facility design, husbandry, history of regulated procedures, strain and sex on welfare-indicating behaviours and mechanical pressure threshold. Six groups of dogs from two facilities (total n=46) were observed in the home pen and tested for mechanical pressure threshold. Dogs which were housed in a purpose-built modern facility or in a modern design home pen showed the fewest behavioural indicators of negative welfare (such as alert or pacing behaviours) and more indicators of positive welfare (such as resting) compared to those in a traditional home pen design or traditional facility. Welfare indicating behaviours did not vary consistently with strain, but male dogs showed more negative welfare indicating behaviours and had greater variation in these behaviours than females. Our findings showed more positive welfare indicating behaviours in dogs with higher mechanical pressure thresholds. We conclude that factors relating to the design of home pens and implementation of Refinements at the facility level have a significant positive impact on the welfare of laboratory-housed dogs, with a potential concomitant impact on scientific endpoints. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights
A multi-reference filtered-x-Newton narrowband algorithm for active isolation of vibration and experimental investigations
Wang, Chun-yu; He, Lin; Li, Yan; Shuai, Chang-geng
In engineering applications, ship machinery vibration may be induced by multiple rotational machines sharing a common vibration isolation platform and operating at the same time, and multiple sinusoidal components may be excited. These components may be located at frequencies with large differences or at very close frequencies. A multi-reference filtered-x Newton narrowband (MRFx-Newton) algorithm is proposed to control these multiple sinusoidal components in an MIMO (multiple input and multiple output) system, especially for those located at very close frequencies. The proposed MRFx-Newton algorithm can decouple and suppress multiple sinusoidal components located in the same narrow frequency band even though such components cannot be separated from each other by a narrowband-pass filter. Like the Fx-Newton algorithm, good real-time performance is also achieved by the faster convergence speed brought by the 2nd-order inverse secondary-path filter in the time domain. Experiments are also conducted to verify the feasibility and test the performance of the proposed algorithm installed in an active-passive vibration isolation system in suppressing the vibration excited by an artificial source and air compressor/s. The results show that the proposed algorithm not only has comparable convergence rate as the Fx-Newton algorithm but also has better real-time performance and robustness than the Fx-Newton algorithm in active control of the vibration induced by multiple sound sources/rotational machines working on a shared platform.
Pen-on-paper strategy for point-of-care testing: Rapid prototyping of fully written microfluidic biosensor.
Li, Zedong; Li, Fei; Xing, Yue; Liu, Zhi; You, Minli; Li, Yingchun; Wen, Ting; Qu, Zhiguo; Ling Li, Xiao; Xu, Feng
Paper-based microfluidic biosensors have recently attracted increasing attention in point-of-care testing (POCT) settings, benefiting from their affordable, accessible and eco-friendly features, where technologies for fabricating such biosensors are preferred to be equipment-free, easy to operate and capable of rapid prototyping. In this work, we developed a pen-on-paper (PoP) strategy based on two custom-made pens, i.e., a wax pen and a conductive-ink pen, to fully write paper-based microfluidic biosensors through directly writing both microfluidic channels and electrodes. Particularly, the proposed wax pen is competent to realize one-step fabrication of wax channels on paper, as the melted wax penetrates into the paper during the writing process without any post-treatments. The practical applications of the fabricated paper-based microfluidic biosensors are demonstrated by both colorimetric detection of Salmonella typhimurium DNA with a detection limit of 1 nM and electrochemical measurement of glucose with a detection limit of 1 mM. The developed PoP strategy for making microfluidic biosensors on paper, characterized by true simplicity, prominent portability and excellent capability for rapid prototyping, shows promising prospects in POCT applications. Copyright © 2017 Elsevier B.V. All rights reserved.
Pen size and parity effects on maternal behaviour of Small-Tail Han sheep.
Lv, S-J; Yang, Y; Dwyer, C M; Li, F-K
The aim of this experiment was to study the effects of pen size and parity on maternal behaviour of twin-bearing Small-Tail Han ewes. A total of 24 ewes were allocated to a 2×2 design (six per pen), with parity (primiparous or multiparous) and pen size (large: 6.0×3.0 m; small: 6.0×1.5 m) as main effects at Linyi University, Shandong Province, China. Behaviour was observed from after parturition until weaning. All ewes were observed for 6 h every 5 days, from 0700 to 1000 h and from 1400 to 1700 h. Continuous focal animal sampling was used to quantify the duration of maternal behaviours: sucking, grooming and following, as well as the frequency of udder accepting, udder refusing and low-pitched bleating. Oestradiol and cortisol concentrations in the faeces (collected in the morning every 5 days) were detected using EIA kits. All lambs were weighed 24 h after parturition and again at weaning at 35 days of age. The small pen size significantly reduced following (P < …) … behaviour in sheep during lactation. The study is also the first to report on the maternal behaviour of Chinese native sheep breeds (Small-Tail Han sheep), with implications for the production of sheep in China.
Conversion of Squid Pens to Chitosanases and Proteases via Paenibacillus sp. TKU042
Chien Thang Doan
Chitosanases and proteases have received much attention due to their wide range of applications. Four kinds of chitinous materials, squid pens, shrimp heads, demineralized shrimp shells and demineralized crab shells, were used as the sole carbon and nitrogen (C/N) source to produce chitosanases, proteases and α-glucosidase inhibitors (αGI) by four different strains of Paenibacillus. Chitosanase productivity was highest in the culture supernatants using squid pens as the sole C/N source. The maximum chitosanase activity of fermented squid pens (0.759 U/mL) was compared to that of fermented shrimp heads (0.397 U/mL), demineralized shrimp shells (0.201 U/mL) and demineralized crab shells (0.216 U/mL). A squid pen concentration of 0.5% was suitable for chitosanase, protease and αGI production via Paenibacillus sp. TKU042. Multi-step purification, including ethanol precipitation and column chromatography on Macro-Prep High S as well as Macro-Prep DEAE (diethylaminoethyl), led to the isolation of Paenibacillus sp. TKU042 chitosanase and protease with molecular weights of 70 and 35 kDa, respectively. For comparison, 16 chitinolytic bacteria, including strains of Paenibacillus, were investigated for the production of chitinase, exochitinase, chitosanase, protease and αGI using two kinds of chitinous sources.
Physics-based preconditioning and the Newton-Krylov method for non-equilibrium radiation diffusion
Mousseau, V.A.; Knoll, D.A.; Rider, W.J.
An algorithm is presented for the solution of the time dependent reaction-diffusion systems which arise in non-equilibrium radiation diffusion applications. This system of nonlinear equations is solved by coupling three numerical methods, Jacobian-free Newton-Krylov, operator splitting, and multigrid linear solvers. An inexact Newton's method is used to solve the system of nonlinear equations. Since building the Jacobian matrix for problems of interest can be challenging, the authors employ a Jacobian-free implementation of Newton's method, where the action of the Jacobian matrix on a vector is approximated by a first order Taylor series expansion. Preconditioned generalized minimal residual (PGMRES) is the Krylov method used to solve the linear systems that come from the iterations of Newton's method. The preconditioner in this solution method is constructed using a physics-based divide and conquer approach, often referred to as operator splitting. This solution procedure inverts the scalar elliptic systems that make up the preconditioner using simple multigrid methods. The preconditioner also addresses the strong coupling between equations with local 2 x 2 block solves. The intra-cell coupling is applied after the inter-cell coupling has already been addressed by the elliptic solves. Results are presented using this solution procedure that demonstrate its efficiency while incurring minimal memory requirements
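The abstract combines two ingredients: a Jacobian-free matrix-vector product (a first-order finite difference of the residual) and a physics-motivated preconditioner. Below is a minimal sketch of one inexact-Newton step with those ingredients, applied to a toy 1-D reaction-diffusion residual; the model problem, the diagonal stand-in for the operator-split preconditioner, and all names are assumptions made for illustration, not the paper's radiation-diffusion system.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Toy 1-D residual: diffusion + linear removal + weak cubic reaction + source,
# with Dirichlet boundaries.  It only stands in for the real physics.
n = 50
def residual(u):
    r = np.empty_like(u)
    r[1:-1] = (u[:-2] - 2.0 * u[1:-1] + u[2:]) - u[1:-1] - 0.1 * u[1:-1]**3 + 1.0
    r[0], r[-1] = u[0], u[-1]          # boundary conditions u = 0
    return r

def newton_step(u, eps=1e-7):
    r0 = residual(u)
    # Jacobian-free: J(u) @ v approximated by a first-order finite difference.
    J = LinearOperator((n, n), matvec=lambda v: (residual(u + eps * v) - r0) / eps)
    # Crude stand-in for a physics-based preconditioner: invert the diagonal of
    # the linearised diffusion + reaction operator.
    diag = -3.0 - 0.3 * u**2
    diag[0] = diag[-1] = 1.0
    M = LinearOperator((n, n), matvec=lambda v: v / diag)
    du, _ = gmres(J, -r0, M=M)         # PGMRES solve of the Newton correction
    return u + du

u = np.zeros(n)
for _ in range(8):                     # outer (inexact) Newton iterations
    u = newton_step(u)
print("residual norm after Newton iterations:", np.linalg.norm(residual(u)))
```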
A multivariate dynamic linear model for early warnings of diarrhea and pen fouling in slaughter pigs
We present a method for providing early, but indiscriminate, predictions of diarrhea and pen fouling in grower/finisher pigs. We collected data on dispensed feed amount, water flow, drinking bout frequency, temperature at two positions per pen, and section-level humidity from 12 pens (6 double …) … a set threshold a sufficient number of times, consecutively. Using this method with a 7-day prediction window, we achieved an area under the receiver operating characteristic curve of 0.84. Shorter prediction windows yielded lower performances, but longer prediction windows did not affect …
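The alarm logic sketched in the abstract (raise a warning once forecast errors stay above a threshold for several consecutive observations, then score the alarms with a ROC curve) can be illustrated as follows; the error series, the event window, the thresholds and all names are synthetic, and `roc_auc_score` is only used to show how an AUC like the reported 0.84 would be computed.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def consecutive_alarm(errors, threshold, runs_needed):
    """Raise an alarm once the forecast error exceeds `threshold` on
    `runs_needed` consecutive observations."""
    run, alarms = 0, np.zeros(len(errors), dtype=bool)
    for i, e in enumerate(errors):
        run = run + 1 if e > threshold else 0
        alarms[i] = run >= runs_needed
    return alarms

# Synthetic standardized forecast errors for one pen, with errors growing a few
# days before an (equally synthetic) diarrhea/fouling event.
rng = np.random.default_rng(0)
errors = rng.normal(size=60)
errors[40:50] += 2.0
event_ahead = np.zeros(60, dtype=int)
event_ahead[43:50] = 1                    # days inside the prediction window

alarms = consecutive_alarm(errors, threshold=1.0, runs_needed=3)
print("alarm raised on days:", np.flatnonzero(alarms))
# Scoring a continuous alarm score against the event window over all thresholds
# gives the ROC curve whose area the paper reports as 0.84.
print("example AUC of the raw error score:", round(roc_auc_score(event_ahead, errors), 2))
```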
Pen Branch Delta and Savannah River Swamp Hydraulic Model
Chen, K.F.
The proposed Savannah River Site (SRS) Wetlands Restoration Project area is located in Barnwell County, South Carolina, on the southwestern boundary of the SRS Reservation. The swamp covers about 40.5 km² and is bounded to the west and south by the Savannah River and to the north and east by low bluffs at the edge of the Savannah River floodplain. Water levels within the swamp are determined by stage along the Savannah River, local drainage, groundwater seepage, and inflows from four tributaries, Beaver Dam Creek, Fourmile Branch, Pen Branch, and Steel Creek. Historic discharges of heated process water into these tributaries scoured the streambed, created deltas in the adjacent wetland, and killed native vegetation in the vicinity of the delta deposits. Future releases from these tributaries will be substantially smaller and closer to ambient temperatures. One component of the proposed restoration project will be to reestablish indigenous wetland vegetation on the Pen Branch delta, which covers about 1.0 km². Long-term predictions of water levels within the swamp are required to determine the characteristics of suitable plants. The objective of the study was to predict water levels at various locations within the proposed SRS Wetlands Restoration Project area for a range of Savannah River flows and regulated releases from Pen Branch. TABS-MD, a two-dimensional finite-element open-channel hydraulic computer code developed by the United States Army Corps of Engineers, was used to model the SRS swamp area for various flow conditions.
Students' Attention when Using Touchscreens and Pen Tablets in a Mathematics Classroom
Chiung-Hui Chiu
Aim/Purpose: The present study investigated and compared students' attention in terms of time-on-task and number of distractors between using a touchscreen and a pen tablet in mathematical problem-solving activities with virtual manipulatives. Background: Although there is an increasing use of these input devices in educational practice, little research has focused on assessing student attention while using touchscreens or pen tablets in a mathematics classroom. Methodology: A qualitative exploration was conducted in a public elementary school in New Taipei, Taiwan. Six fifth-grade students participated in the activities. Video recordings of the activities and the students' actions were analyzed. Findings: The results showed that students in the activity using touchscreens maintained greater attention and, thus, had more time-on-task and fewer distractors than those in the activity using pen tablets. Recommendations for Practitioners: School teachers could employ touchscreens in mathematics classrooms to support activities that focus on students' manipulations in relation to the attention paid to the learning content. Recommendation for Researchers: The findings enhance our understanding of the input devices used in educational practice and provide a basis for further research. Impact on Society: The findings may also shed light on the human-technology interaction process involved in using pen and touch technology conditions. Future Research: Activities similar to those reported here should be conducted using more participants. In addition, it is important to understand how students with different levels of mathematics achievement use the devices in the activities.
Development and Evaluation of an Interactive Pen
Froilan G. Destreza
Technologies have reached the classroom and have become one of the means of teaching nowadays. Multimedia projectors have become one of the teaching tools teachers cannot do without. The concept of making this tool interactive and easier to use was conceived by the researcher. The researcher's objective was to develop such a tool and evaluate it according to its portability, simplicity, robustness, user-friendliness, effectiveness and efficiency. The respondents of the project were both students and teachers of Batangas State University ARASOF-Nasugbu. The researcher developed different prototypes for the interactive pen, tested them in different environments and demonstrated the "know-how" of the project. The project was built using a simple infrared light-emitting diode (IR LED), an infrared tracker, and software which computes, detects and interacts with the application program. Evaluation of the project followed the demonstration. The project received high acceptance according to its portability, simplicity, robustness, user-friendliness, effectiveness and efficiency. The researcher recommends the full implementation of the project at Batangas State University ARASOF-Nasugbu and further enhancement of the project by eliminating the pen.
Pen Culture of the Black-Chinned Tilapia, Sarotherodon ...
Pen-fish-culture as a culture-based fisheries approach was investigated in the Aglor Lagoon from December 2003 to June, 2004. The fish used in the study was the Black-chinned tilapia Sarotherodon melanotheron. The growth performance of S. melanotheron cultured for six months in the Aglor Lagoon under three ...
Demonstrating Kinematics and Newton's Laws in a Jump
Kamela, Martin
When students begin the study of Newton's laws they are generally comfortable with static equilibrium type problems, but dynamic examples where forces are not constant are more challenging. The class exercise presented here helps students to develop an intuitive grasp of both the position-velocity-acceleration relation and the force-acceleration…
Gamow on Newton: Another Look at Centripetal Acceleration
Corrao, Christian
Presented here is an adaptation of George Gamow's derivation of the centripetal acceleration formula as it applies to Earth's orbiting Moon. The derivation appears in Gamow's short but engaging book "Gravity", first published in 1962, and is essentially a distillation of Newton's work. While "TPT" contributors have offered several insightful…
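Gamow's geometric argument leads to the familiar result a = v²/r for uniform circular motion. A quick numerical check of that formula, not the article's own derivation, with made-up values for the radius and angular speed:

```python
import numpy as np

# Numerical check of a = v**2 / r for uniform circular motion: differentiate a
# circular trajectory twice and compare with r * omega**2.
r, omega = 2.0, 3.0                              # radius [m], angular speed [rad/s]
t = np.linspace(0.0, 2.0 * np.pi / omega, 20001)
x, y = r * np.cos(omega * t), r * np.sin(omega * t)

vx, vy = np.gradient(x, t), np.gradient(y, t)
ax, ay = np.gradient(vx, t), np.gradient(vy, t)

speed = np.hypot(vx, vy)[100:-100].mean()        # drop finite-difference edge points
accel = np.hypot(ax, ay)[100:-100].mean()
print(f"v^2 / r = {speed**2 / r:.3f}   |a| = {accel:.3f}   r*omega^2 = {r * omega**2:.3f}")
```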
Proving Newton Right or Wrong with Blur Photography
Davidhazy, Andrew
Sir Isaac Newton determined that the acceleration constant for gravity was 32 ft/s² (feet per second per second). This is a fact that most students become familiar with over time and through various means. This article describes how this can be demonstrated in a technology classroom using simple photographic equipment. (Contains 5 figures.)
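One hedged reading of how a blur photograph can yield the acceleration of gravity: a falling object exposed for a short time T, a time t after release, leaves a streak of length L ≈ g·t·T (its speed g·t times the exposure). The numbers below are invented, and the article's actual procedure may differ.

```python
# Estimate g from a motion-blur streak; all measured values are made up.
t_after_release = 0.50    # s since the object was dropped
exposure        = 0.02    # s shutter (exposure) time
streak_length   = 0.098   # m streak length measured from the photograph

g_estimate = streak_length / (t_after_release * exposure)
print(f"estimated g ≈ {g_estimate:.1f} m/s^2  (≈ 32 ft/s^2)")
```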
Self-adaptive Newton-based iteration strategy for the LES of turbulent multi-scale flows
Daude, F.; Mary, I.; Comte, P.
An improvement of the efficiency of implicit schemes based on Newton-like methods for the simulation of turbulent flows by compressible LES or DNS is proposed. It hinges on a zonal Self-Adaptive Newton method (hereafter denoted SAN), capable of taking advantage of Newton convergence rate heterogeneities in multi-scale flow configurations due to a strong spatial variation of the mesh resolution, such as transitional or turbulent flows controlled by small actuators or passive devices. Thanks to a predictor of the local Newton convergence rate, SAN provides computational savings by allocating resources in regions where they are most needed. The consistency with explicit time integration and the efficiency of the method are checked in three test cases: - The standard test-case of 2-D linear advection of a vortex, on three different two-block grids. - Transition to 3-D turbulence on the lee-side of an airfoil at high angle of attack, which features a challenging laminar separation bubble with a turbulent reattachment. - A passively-controlled turbulent transonic cavity flow, for which the CPU time is reduced by a factor of 10 with respect to the baseline algorithm, illustrates the interest of the proposed algorithm. (authors)
Personality Traits Characterized by Adjectives in a Famous Chinese Novel of the 18th Century
Junpeng Zhu
The personality-descriptive adjectives used in a famous Chinese novel of the 18th century, A Dream of Red Mansions, which is thought to broadly reflect Chinese culture, might help depict personality structure. Four hundred ninety-three personality-descriptive adjectives from the first 80 chapters of the novel were administered to 732 Chinese university students. After factor analyses, the one- to seven-factor solutions were extracted, and the five-factor solution was relatively the clearest. The five personality factors, titled Wicked, Intelligent, Amiable, Conscientious, and Frank, were intercorrelated. Men scored higher on Wicked and Conscientious but lower on Amiable compared with women. As a preliminary trial, our study demonstrates that personality-descriptive adjectives in a famous Chinese novel characterize the personality structure.
A feature based comparison of pen and swipe based signature characteristics.
Robertson, Joshua; Guest, Richard
Dynamic Signature Verification (DSV) is a biometric modality that identifies anatomical and behavioral characteristics when an individual signs their name. Conventionally, signature data have been captured using pen/tablet apparatus. However, the use of other devices such as touch-screen tablets has expanded in recent years, affording the possibility of assessing biometric interaction on this new technology. To explore the potential of employing DSV techniques when a user signs or swipes with their finger, we report a study to correlate pen- and finger-generated features. Investigating the stability and correlation between a set of characteristic features recorded in participants' signatures and touch-based swipe gestures, a statistical analysis was conducted to assess consistency between capture scenarios. The results indicate that there is a range of static and dynamic features, such as the rate of jerk, size, duration and the distance the pen traveled, that can lead to interoperability between these two input methods within a potential biometric context. These data indicate, as a general principle, that the same underlying constructional mechanisms are evident in both cases. Copyright © 2015 Elsevier B.V. All rights reserved.
Pen-mate directed behaviour in ad libitum fed pigs given different quantities and frequencies of straw
Williams, Charlotte Amdi; Lahrmann, H. P.; Oxholm, L. C.
Straw stimulates explorative behaviour and is therefore attractive to pigs. Further, it can be effective in reducing negative pen-mate directed behaviours. Under most commercial conditions, straw can only be used in limited amounts as it can be difficult to handle in most vacuum slurry systems … as a control treatment, against which the other treatments (quantities T25 and T50) and frequencies of straw allocations (T2×50 and T4×25) were tested. Three focal pigs per pen were randomly chosen and observed for 15 min per hour where tail-in-mouth, ear-in-mouth, aggression and other pen-mate directed … behaviour were recorded. In addition, residual straw in the pens was assessed using four categories ranging from straw in a thin layer; little straw; few straws; and soiled straw. Pigs were active for about 30% of the registered time, but overall no differences in total pen-mate directed behaviour (tail …
On topological modifications of Newton's law
Floratos, E.G.; Leontaris, G.K.
Recent cosmological data for very large distances challenge the validity of the standard cosmological model. Motivated by the observed spatial flatness, the accelerating expansion and the various anisotropies with preferred axes in the universe, we examine the consequences of the simple hypothesis that the three-dimensional space has a global R²×S¹ topology. We take the radius of the compactification to be the observed cosmological scale beyond which the accelerated expansion starts. We derive the induced corrections to Newton's gravitational potential and we find that for distances smaller than the S¹ radius the leading 1/r term is corrected by a convergent power series of multipole form in the polar angle, making explicit the anisotropy induced by the compactified third dimension. On the other hand, for distances larger than the compactification scale the asymptotic behavior of the potential exhibits a logarithmic dependence with exponentially small corrections. The change of Newton's force from 1/r² to 1/r behavior implies a weakening of the deceleration for the expanding universe. Such topologies can also be created locally by standard Newtonian axially symmetric mass distributions with periodicity along the symmetry axis. In such cases we can use our results to obtain measurable modifications of Newtonian orbits for small distances and flat rotation spectra for large distances at the galactic level.
Special relativity, electrodynamics, and general relativity from Newton to Einstein
Kogut, John B
Special Relativity, Electrodynamics and General Relativity: From Newton to Einstein, Second Edition, is intended to teach (astro)physics, astronomy, and cosmology students how to think about special and general relativity in a fundamental, but accessible, way. Designed to render any reader a "master of relativity," everything on the subject is comprehensible and derivable from first principles. The book emphasizes problem solving, contains abundant problem sets, and is conveniently organized to meet the needs of both student and instructor. This fully revised, updated and expanded second edition includes new chapters on magnetism as a consequence of relativity and electromagnetism; contains many improved and more engaging figures; uses less algebra, resulting in more efficient derivations; and enlarges the discussion of dynamics and the relativistic version of Newton's second law.
Newton's Law: Not so Simple after All
Robertson, William C.; Gallagher, Jeremiah; Miller, William
One of the most basic concepts related to force and motion is Newton's first law, which essentially states, "An object at rest tends to remain at rest unless acted on by an unbalanced force. An object in motion in a straight line tends to remain in motion in a straight line unless acted upon by an unbalanced force." Judging by the time and space…
(D-Pen²,4′-¹²⁵I-Phe⁴,D-Pen⁵)enkephalin: A selective high affinity radioligand for delta opioid receptors with exceptional specific activity
Knapp, R.J.; Sharma, S.D.; Toth, G.; Duong, M.T.; Fang, L.; Bogert, C.L.; Weber, S.J.; Hunt, M.; Davis, T.P.; Wamsley, J.K. (Department of Pharmacology, University of Arizona, College of Medicine, Tucson (United States))
(D-Pen²,4′-¹²⁵I-Phe⁴,D-Pen⁵)enkephalin ((¹²⁵I)DPDPE) is a highly selective radioligand for the delta opioid receptor with a specific activity (2200 Ci/mmol) that is over 50-fold greater than that of tritium-labeled DPDPE analogs. (¹²⁵I)DPDPE binds to a single site in rat brain membranes with an equilibrium dissociation constant (Kd) value of 421 ± 67 pM and a receptor density (Bmax) value of 36.4 ± 2.7 fmol/mg protein. The high affinity of this site for delta opioid receptor ligands and its low affinity for mu or kappa receptor-selective ligands are consistent with its being a delta opioid receptor. The distribution of these sites in rat brain, observed by receptor autoradiography, is also consistent with that of delta opioid receptors. Association and dissociation binding kinetics of 1.0 nM (¹²⁵I)DPDPE are monophasic at 25 °C. The association rate (k₊₁ = 5.80 ± 0.88 × 10⁷ M⁻¹ min⁻¹) is about 20- and 7-fold greater than that measured for 1.0 nM (³H)DPDPE and 0.8 nM (³H)(D-Pen²,4′-Cl-Phe⁴,D-Pen⁵)enkephalin, respectively. The dissociation rate of (¹²⁵I)DPDPE (0.917 ± 0.117 × 10⁻² min⁻¹) measured at 1.0 nM is about 3-fold faster than is observed for either of the other DPDPE analogs. The rapid binding kinetics of (¹²⁵I)DPDPE is advantageous because binding equilibrium is achieved with much shorter incubation times than are required for other cyclic enkephalin analogs. This, in addition to its much higher specific activity, makes (¹²⁵I)DPDPE a valuable new radioligand for studies of delta opioid receptors.
Feedlot- and Pen-Level Prevalence of Enterohemorrhagic Escherichia coli in Feces of Commercial Feedlot Cattle in Two Major U.S. Cattle Feeding Areas.
Cull, Charley A; Renter, David G; Dewsbury, Diana M; Noll, Lance W; Shridhar, Pragathi B; Ives, Samuel E; Nagaraja, Tiruvoor G; Cernicchiaro, Natalia
The objective of this study was to determine feedlot- and pen-level fecal prevalence of seven enterohemorrhagic Escherichia coli (EHEC) belonging to serogroups (O26, O45, O103, O111, O121, O145, and O157, or EHEC-7) in feces of feedlot cattle in two feeding areas in the United States. Cattle pens from four commercial feedlots in each of the two major U.S. beef cattle areas were sampled. Up to 16 pen-floor fecal samples were collected from each of 4-6 pens per feedlot, monthly, for a total of three visits per feedlot, from June to August, 2014. Culture procedures including fecal enrichment in E. coli broth, immunomagnetic separation, and plating on selective media, followed by confirmation through polymerase chain reaction (PCR) testing, were conducted. Generalized linear mixed models were fitted to estimate feedlot-, pen-, and sample-level fecal prevalence of EHEC-7 and to evaluate associations between potential demographic and management risk factors with feedlot and within-pen prevalence of EHEC-7. All study feedlots and 31.0% of the study pens had at least one non-O157 EHEC-positive fecal sample, whereas 62.4% of pens tested positive for EHEC O157; sample-level prevalence estimates ranged from 0.0% for EHEC O121 to 18.7% for EHEC O157. Within-pen prevalence of EHEC O157 varied significantly by sampling month; similarly within-pen prevalence of non-O157 EHEC varied significantly by month and by the sex composition of the pen (heifer, steer, or mixed). Feedlot management factors, however, were not significantly associated with fecal prevalence of EHEC-7. Intraclass correlation coefficients for EHEC-7 models indicated that most of the variation occurred between pens, rather than within pens, or between feedlots. Hence, the potential combination of preharvest interventions and pen-level management strategies may have positive food safety impacts downstream along the beef chain.
Poly(ethylene naphthalate) - PEN: historical review and main trends in world application
Edilene de Cássia D. Nunes
This paper presents a review on poly(ethylene naphthalate) - PEN, including several aspects related to poly(ethylene terephthalate) - PET / poly(ethylene naphthalate) - PEN polymer blends. The paper is the result of a joint development between Alcoa Alumínio S.A. - Packaging Division and the Department of Materials Engineering - Federal University of São Carlos (UFSCar), whose scope is to investigate the subject addressed here.
Le Pen justified the Nazis' actions in France / Margo Pajuste
Index Scriptorium Estoniae
Pajuste, Margo
The leader of the French far-right party, Jean-Marie Le Pen, claimed that the German occupation of France during World War II was not particularly inhumane, and that if the claims about mass murders in France were true, there would have been no need to create concentration camps for political prisoners. On the reactions in France to Le Pen's statements.
A CD with wishes for the 21st century from thousands of readers of the science magazine "Newton" was buried at the ATLAS construction site on 16.03.2000 (handling the CD: Giorgio Riviecco, Editor of "Newton").
Laurent Guiraud
Newton's 'Principia Mathematica Philosophia' and Planck's elementary constants
Rompe, R.; Treder, H.J.
Together with Planck's elementary constants, Newton's principles provide a secure basis for physics and the 'exact' sciences in all their branches. These physical concepts are adequate for all physical problems as well as for technology. Classical physics was founded in such a way that it reaches far beyond the physics of macroscopic bodies. (author)
Cost minimization analysis of different growth hormone pen devices based on time-and-motion simulations
Kim Jaewhan
Background: Numerous pen devices are available to administer recombinant human growth hormone (rhGH), and both patients and health plans have varying issues to consider when selecting a particular product and device for daily use. Therefore, the present study utilized multi-dimensional product analysis to assess potential time involvement, required weekly administration steps, and utilization costs relative to daily rhGH administration. Methods: Study objectives were to conduct (1) Time-and-Motion (TM) simulations in a randomized block design that allowed time and steps comparisons related to rhGH preparation, administration and storage, and (2) a Cost Minimization Analysis (CMA) relative to opportunity and supply costs. Nurses naïve to rhGH administration and devices were recruited to evaluate four rhGH pen devices (2 in liquid form, 2 requiring reconstitution) via TM simulations. Five videotaped and timed trials for each product were evaluated based on: (1) Learning (initial use instructions), (2) Preparation (arrange device for use), (3) Administration (actual simulation manikin injection), and (4) Storage (maintain product viability between doses), in addition to assessment of steps required for weekly use. The CMA applied micro-costing techniques related to opportunity costs for caregivers (categorized as wages), non-drug medical supplies, and drug product costs. Results: Norditropin® NordiFlex and Norditropin® NordiPen (NNF and NNP, Novo Nordisk, Inc., Bagsværd, Denmark) took less weekly Total Time (p < …) than Genotropin® Pen (GTP, Pfizer, Inc., New York, New York) or HumatroPen® (HTP, Eli Lilly and Company, Indianapolis, Indiana). Time savings were directly related to differences in new-package Preparation times (NNF 1.35 minutes, NNP 2.48 minutes, GTP 4.11 minutes, HTP 8.64 minutes; p < …). Conclusions: Time-and-motion simulation data used to support a micro-cost analysis demonstrated that the pen device with the greater time demand has the highest net costs.
Genius Is Not Immune to Persistent Misconceptions: Conceptual Difficulties Impeding Isaac Newton and Contemporary Physics Students.
Steinberg, Melvin S.; And Others
Recent research has shown that serious misconceptions frequently survive high school and university instruction in mechanics. It is interesting to inquire whether Newton himself encountered conceptual difficulties before he wrote the "Principia." This paper compares Newton's pre-"Principia" beliefs, based upon his writings,…
Influence of Penning effect on the plasma features in a non-equilibrium atmospheric pressure plasma jet
Chang, Zhengshi; Zhang, Guanjun [School of Electrical Engineering, Xi' an Jiaotong University, Xi' an 710049 (China); Jiang, Nan; Cao, Zexian, E-mail: [email protected] [Institute of Physics, Chinese Academy of Sciences, Beijing 100190 (China)
The non-equilibrium atmospheric pressure plasma jet (APPJ) is a cold plasma source that promises various innovative applications. The influence of the Penning effect on the formation, propagation, and other physical properties of the plasma bullets in an APPJ remains a debatable topic. By using a 10 cm wide active electrode and a frequency of the applied voltage down to 0.5 Hz, the Penning effect caused by preceding discharges can be excluded. It was found that the Penning effect originating in a preceding discharge helps build a conductive channel in the gas flow and provides seed electrons; thus the discharge can be maintained at a low voltage, which in turn leads to a smaller propagation speed for the plasma bullet. Photographs from an intensified charge-coupled device reveal that the annular structure of the plasma plume for He is irrelevant to the Penning ionization process arising from preceding discharges. By adding NH₃ into Ar to introduce the Penning effect, the originally filamentous discharge of Ar can display a rather extensive plasma plume in the ambient air, as He does. These results are helpful for the understanding of the behaviors of non-equilibrium APPJs generated under distinct conditions and for the design of plasma jet features, especially the spatial distribution and propagation speed, which are essential for application.
Fourier Transform Infrared (FTIR) Spectroscopy with Chemometric Techniques for the Classification of Ballpoint Pen Inks
Muhammad Naeim Mohamad Asri
FTIR spectroscopic techniques have been shown to possess good abilities to analyse ballpoint pen inks. These in-situ techniques involve directing light onto ballpoint ink samples to generate an FTIR spectrum, providing "molecular fingerprints" of the ink samples and thus allowing direct visual comparison. In this study, ink from blue (n=15) and red (n=15) ballpoint pens of five different brands, Kilometrico®, G-Soft®, Stabilo®, Pilot® and Faber Castell®, was analysed using the FTIR technique with the objective of establishing a distinctive differentiation according to the brand. The resulting spectra were first compared and grouped manually. Due to the similarities in terms of colour and shade of the inks, distinctive differentiation could not be achieved by means of direct visual comparison. However, when the same spectral data were analysed by Principal Component Analysis (PCA) software, distinctive grouping of the ballpoint pen inks was achieved. Our results demonstrate that PCA can be used objectively to investigate ballpoint pen inks of similar colour and, more importantly, of different brands.
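A sketch of the chemometric step described above (PCA on FTIR spectra so that inks group by brand), using scikit-learn; the synthetic Gaussian "spectra", band positions and brand names are placeholders, not data from the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
wavenumbers = np.linspace(600, 4000, 400)

def fake_spectrum(peak):
    """Gaussian absorption band plus noise, standing in for a real FTIR spectrum."""
    return np.exp(-((wavenumbers - peak) / 80.0) ** 2) + 0.02 * rng.normal(size=wavenumbers.size)

brands = {"brand_A": 1600.0, "brand_B": 1730.0, "brand_C": 2900.0}   # hypothetical band positions
spectra, labels = [], []
for name, peak in brands.items():
    for _ in range(15):                       # n = 15 pens per brand, as in the study
        spectra.append(fake_spectrum(peak))
        labels.append(name)

# Standardize each wavenumber, then project onto the first two principal components.
X = StandardScaler().fit_transform(np.array(spectra))
scores = PCA(n_components=2).fit_transform(X)
for name in brands:
    pts = scores[[l == name for l in labels]]
    print(f"{name}: mean PC1 = {pts[:, 0].mean():+.2f}, mean PC2 = {pts[:, 1].mean():+.2f}")
```

Plotting the two score columns would show the brand-wise clustering that direct visual comparison of the spectra could not reveal.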
Newton's Use of the Pendulum to Investigate Fluid Resistance: A Case Study and Some Implications for Teaching about the Nature of Science
Books I and III of Newton's "Principia" develop Newton's dynamical theory and show how it explains a number of celestial phenomena. Book II has received little attention from historians or educators because it does not play a major role in Newton's argument. However, it is in Book II that we see most clearly Newton both as a theoretician and an…
Decentralized Quasi-Newton Methods
Eisen, Mark; Mokhtari, Aryan; Ribeiro, Alejandro
We introduce the decentralized Broyden-Fletcher-Goldfarb-Shanno (D-BFGS) method as a variation of the BFGS quasi-Newton method for solving decentralized optimization problems. The D-BFGS method is of interest in problems that are not well conditioned, making first order decentralized methods ineffective, and in which second order information is not readily available, making second order decentralized methods impossible. D-BFGS is a fully distributed algorithm in which nodes approximate curvature information of themselves and their neighbors through the satisfaction of a secant condition. We additionally provide a formulation of the algorithm in asynchronous settings. Convergence of D-BFGS is established formally in both the synchronous and asynchronous settings and strong performance advantages relative to first order methods are shown numerically.
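The secant condition mentioned in the abstract is the defining property of BFGS-type curvature approximations. Below is a minimal, centralised sketch of the classical BFGS inverse-Hessian update together with a numerical check that the updated matrix satisfies H_new · y = s; the decentralised D-BFGS variant modifies this with neighbour information, which is not reproduced here.

```python
import numpy as np

def bfgs_inverse_update(H, s, y):
    """Classical BFGS update of an inverse-Hessian approximation H.

    s = x_{k+1} - x_k and y = grad_{k+1} - grad_k.  The returned matrix
    satisfies the secant condition H_new @ y = s.
    """
    rho = 1.0 / float(y @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# One update from made-up step and gradient-difference vectors, followed by a
# numerical check of the secant condition.
s = np.array([0.3, -0.1])
y = np.array([0.8, 0.2])
H_new = bfgs_inverse_update(np.eye(2), s, y)
print("H_new @ y =", H_new @ y)   # equals s = [0.3, -0.1] up to round-off
```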
CRESU studies of electron attachment and Penning ionization at temperatures down to 48 K
Le Garrec, J.L.; Mitchell, J.B.A.; Rowe, B.R.
The object of the present report is to present results for electron attachment and Penning ionization obtained with the addition of a Langmuir probe to the measurement apparatus. Measurements of the rate coefficients for electron attachment and Penning ionization are performed using the standard techniques for flow reactors. Rate coefficients for the Penning ionization of argon, molecular nitrogen and molecular oxygen by metastable helium are presented. The results obtained concerning the attachment of electrons to SF₆ (non-dissociative), CCl₂F₂ (producing Cl⁻), and CF₃Br as a function of temperature are presented. The differences observed, and the variation of the value of β below 100 K, provide evidence of the strong influence of the internal state (probably vibrational) of the CF₃Br molecule on its dissociative attachment.
Female body as a fetish in Helmut Newton's photography
Pantović Katarina
The paper illuminates some of the principles by which Helmut Newton's photographic poetics functions. It is examined from the perspectives of recent art history, feminist critique and psychoanalytic theory. His photographs stop just short of pornography, yet remain within the jet-set milieu, reflecting at the same time the sexual revolution of the 1960s and 1970s, the rise of the fashion and film industries, and other Western emancipatory movements. Newton's obscure photojournalism provoked conventions, presenting the female body as a fetish and object of erotic pleasure while nonetheless affirming a new feminine self-consciousness and freedom. Thus, he constituted modern eroticism by connecting fetishism, voyeurism and sadomasochism, creating a provocative hybrid photography that embraced fashion, eroticism and portraiture, documenting in a highly stylized manner the decadence and eccentricity of the lifestyle of the rich.
Stabilized quasi-Newton optimization of noisy potential energy surfaces
Schaefer, Bastian; Goedecker, Stefan, E-mail: [email protected] [Department of Physics, University of Basel, Klingelbergstrasse 82, CH-4056 Basel (Switzerland); Alireza Ghasemi, S. [Institute for Advanced Studies in Basic Sciences, P.O. Box 45195-1159, IR-Zanjan (Iran, Islamic Republic of); Roy, Shantanu [Computational and Systems Biology, Biozentrum, University of Basel, CH-4056 Basel (Switzerland)
Optimizations of atomic positions belong to the most commonly performed tasks in electronic structure calculations. Many simulations like global minimum searches or characterizations of chemical reactions require performing hundreds or thousands of minimizations or saddle computations. To automatize these tasks, optimization algorithms must not only be efficient but also very reliable. Unfortunately, computational noise in forces and energies is inherent to electronic structure codes. This computational noise poses a severe problem to the stability of efficient optimization methods like the limited-memory Broyden–Fletcher–Goldfarb–Shanno algorithm. We here present a technique that allows obtaining significant curvature information of noisy potential energy surfaces. We use this technique to construct both, a stabilized quasi-Newton minimization method and a stabilized quasi-Newton saddle finding approach. We demonstrate with the help of benchmarks that both the minimizer and the saddle finding approach are superior to comparable existing methods.
On Newton's third law and its symmetry-breaking effects
Pinheiro, Mario J
The law of action-reaction, considered by Ernst Mach as the cornerstone of physics, is thoroughly used to derive the conservation laws of linear and angular momentum. However, the conflict between momentum conservation law and Newton's third law, on experimental and theoretical grounds, calls for more attention. We give a background survey of several questions raised by the action-reaction law and, in particular, the role of the physical vacuum is shown to provide an appropriate framework for clarifying the occurrence of possible violations of the action-reaction law. Then, in the framework of statistical mechanics, using a maximizing entropy procedure, we obtain an expression for the general linear momentum of a body particle. The new approach presented here shows that Newton's third law is not verified in systems out of equilibrium due to an additional entropic gradient term present in the particle's momentum.
Newton's second law in a non-commutative space
Romero, Juan M.; Santiago, J.A.; Vergara, J. David
In this Letter we show that corrections to Newton's second law appear if we assume a symplectic structure consistent with the commutation rules of non-commutative quantum mechanics. For a central field we find that the correction term breaks the rotational symmetry. For the Kepler problem, this term is similar to a Coriolis force.
Noble-gas ionization in the ion source with Penning effect
Monchka, D.; Lyatushinskij, A.; Vasyak, A.
The efficiency of the ion source can be increased by the additional use of Penning ionization. The results of estimates of certain coefficients for the processes taking place in plasma ion sources are presented.
Has ESA's XMM-Newton cast doubt over dark energy?
Galaxy cluster RXJ0847: The fuzzy object at the centre of the frame is one of the galaxy clusters observed by XMM-Newton in its investigation of the distant Universe. The cluster, designated RXJ0847.2+3449, is about 7 000 million light years away, so we see it here as it was 7 000 million years ago, when the Universe was only about half of its present age. This cluster is made up of several dozen galaxies. Eight distant clusters of galaxies, the furthest of which is around 10 thousand million light years away, were studied by an international group of astronomers led by David Lumb of ESA's Space Research and Technology Centre (ESTEC) in the Netherlands. They compared these clusters to those found in the nearby Universe. This study was conducted as part of the larger XMM-Newton Omega Project, which investigates the density of matter in the Universe under the lead of Jim Bartlett of the Collège de France. Clusters of galaxies are prodigious emitters of X-rays because they contain a large quantity of high-temperature gas. This gas surrounds galaxies in the same way as steam surrounds people in a sauna. By measuring the quantity and energy of X-rays from a cluster, astronomers can work out both the temperature of the cluster gas and also the mass of the cluster. Theoretically, in a Universe where the density of matter is high, clusters of galaxies would continue to grow with time and so, on average, should contain more mass now than in the past. Most astronomers believe that we live in a low-density Universe in which a mysterious substance known as 'dark energy' accounts for 70% of the content of the cosmos and, therefore, pervades everything. In this scenario, clusters of galaxies should stop growing early in the history of the Universe and look virtually indistinguishable from those of today. In a paper soon to be published by the European journal Astronomy and Astrophysics, astronomers from the XMM-Newton …
When Newton's cooling law doesn't hold
Tarnow, E.
What is the fastest way to cool something? If the object is macroscopic it is to lower the surrounding temperature as much as possible and let Newton's cooling law take effect. If we enter the microscopic world where quantum mechanics rules, this procedure may no longer be the best. This is shown in a simple example where we calculate the optimum cooling rate for an asymmetric two-state system
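For contrast with the quantum two-state case discussed above, the macroscopic law itself is just exponential relaxation toward the surrounding temperature, dT/dt = -k(T - T_env). A tiny sketch with made-up numbers:

```python
import numpy as np

# Newton's cooling law: T(t) = T_env + (T0 - T_env) * exp(-k t).
T0, T_env, k = 90.0, 20.0, 0.05          # initial temp [°C], surroundings [°C], rate constant [1/min]
t = np.linspace(0.0, 60.0, 7)            # minutes

T = T_env + (T0 - T_env) * np.exp(-k * t)
for ti, Ti in zip(t, T):
    print(f"t = {ti:4.0f} min   T = {Ti:5.1f} °C")
```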
A new pen device for injection of recombinant human growth hormone: a convenience, functionality and usability evaluation study
Sauer M
Maritta Sauer (Global Strategic Insights, Merck KGaA, Darmstadt, Germany) and Carole Abbotts (Pharmaceutical Market Research Consultant, London, UK). Purpose: Adherence to recombinant human growth hormone (r-hGH) is critical to growth and other outcomes in patients with growth disorders, but the requirement for daily injections means that ease of use is an important factor. This study assessed the perceived ease of use and functionality of the prototype of a reusable pen injector (pen device) for r-hGH that incorporates several advanced features. Participants and methods: Semi-structured 60-minute qualitative interviews were conducted in 5 countries with 57 health care professionals (HCPs) and 30 patients with GH deficiency/caregivers administering r-hGH to patients, including children. HCPs had to be responsible for training in the use of r-hGH pen devices and to see ≥4 r-hGH patients/caregivers per month. Patients/caregivers had to have experience with r-hGH administration for at least 6 months. Results: Thirty-seven (65%) of HCPs described the pen device as "simple" or "easy" to use. The aluminum body was generally perceived as attractive, high quality and comfortable to hold and operate. The ease of preparation and use made it suitable for both children and adults. The ability to dial back the r-hGH dose, if entered incorrectly, was mentioned as a major benefit, because other devices need several user steps to reset. Patients/caregivers felt the pen device was easy to use and the injection-feedback features reassured them that the full dose had been given. Overall, 40 (70%) HCPs and 16 (52%) patients/caregivers were likely to recommend or request the pen device. Moreover, patients/caregivers rated the pen device higher than their current reusable pens and almost equal to the leading disposable device for ease of learning, preparation, administration and ease of use. Conclusion: The prototype pen device successfully met its design …
Aspiration of a perforated pen cap: complete tracheal obstruction ...
Foreign body aspiration is a common but underestimated event in children with potentially fatal outcome. Because of unreliable histories and inconsistent clinical and radiologic findings, diagnosis and treatment can represent a challenge. Inhaled pen caps predispose for complete airway obstruction and are difficult to ...
Famous people and genetic disorders: from monarchs to geniuses--a portrait of their genetic illnesses.
Ho, Nicola C; Park, Susan S; Maragh, Kevin D; Gutter, Emily M
Famous people with genetic disorders have always been a subject of interest because such news feeds the curiosity the public has for celebrities. It gives further insight into their lives and provides a medical basis for any unexplained or idiosyncratic feature or behavior they exhibit. It draws admiration from society of those who excel in their specialized fields despite the impositions of their genetic illnesses and also elicits sympathy even in the most casual observer. Such news certainly catapults a rare genetic disorder into the realm of public awareness. We hereby present six famous figures: King George III, Toulouse-Lautrec, Queen Victoria, Nicolo Paganini, Abraham Lincoln, and Vincent van Gogh, all of whom made a huge indelible mark in either the history of politics or that of the arts. Copyright 2003 Wiley-Liss, Inc.
Art therapy using famous painting appreciation maintains fatigue levels during radiotherapy in cancer patients
Koom, Woong Sub; Lee, Jeong Shin; Kim, Yong Bae; Choi, Mi Yeon; Park, Eun Jung; Kim, Ju Hye; Kim, Sun Hyun
The purpose of this study was to evaluate the efficacy of art therapy to control fatigue in cancer patients during course of radiotherapy and its impact on quality of life (QoL). Fifty cancer patients receiving radiotherapy received weekly art therapy sessions using famous painting appreciation. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes of scores over time were analyzed using a generalized linear mixed model. Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes in scores from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Fatigue and QoL in cancer patients with art therapy do not deteriorate during a period of radiotherapy. Despite the single-arm small number of participants and pilot design, this study provides a strong initial demonstration that art therapy of appreciation for famous painting is worthy of further study for fatigue and QoL improvement. Further, it can play an important role in routine practice in cancer patients during radiotherapy
Newton's laws through a science adventure
Šuštar, Sara
The main purpose of my diploma thesis is to create a scientific adventure based on Newton's laws. My aim has been to introduce this topic to kids in elementary school as well as to the general public. That is why the adventure will take place in the House of Experiments. The first part is dedicated to theory and various experiments, which lead to a deeper understanding of the laws. I implemented experiments on rollerblades, such as free movement and movement with the help of springs, which were …
Precise mass measurements of exotic nuclei--the SHIPTRAP Penning trap mass spectrometer
Herfurth, F.; Ackermann, D.; Block, M.; Dworschak, M.; Eliseev, S.; Hessberger, F.; Hofmann, S.; Kluge, H.-J.; Maero, G.; Martin, A.; Mazzocco, M.; Rauth, C.; Vorobjev, G.; Blaum, K.; Ferrer, R.; Neidherr, D.; Chaudhuri, A.; Marx, G.; Schweikhard, L.; Neumayr, J.
The SHIPTRAP Penning trap mass spectrometer has been designed and constructed to measure the mass of short-lived, radioactive nuclei. The radioactive nuclei are produced in fusion-evaporation reactions and separated in flight with the velocity filter SHIP at GSI in Darmstadt. They are captured in a gas cell and transferred to a double Penning trap mass spectrometer. There, the cyclotron frequencies of the radioactive ions are determined and yield mass values with uncertainties ≥ 4.5·10⁻⁸. More than 50 nuclei have been investigated so far with the present overall efficiency of about 0.5 to 2%.
Field-scale evaluation of water fluxes and manure solution leaching in feedlot pen soils.
García, Ana R; Maisonnave, Roberto; Massobrio, Marcelo J; Fabrizio de Iorio, Alicia R
Accumulation of beef cattle manure on feedlot pen surfaces generates large amounts of dissolved solutes that can be mobilized by water fluxes, affecting surface and groundwater quality. Our objective was to examine the long-term impacts of a beef cattle feeding operation on water fluxes and manure leaching in feedlot pens located on sandy loam soils of the subhumid Sandy Pampa region in Argentina. Bulk density, gravimetric moisture content, and chloride concentration were quantified. Rain simulation trials were performed to estimate infiltration and runoff rates. Using chloride ion as a tracer, profile analysis techniques were applied to estimate the soil moisture flux and manure conservative chemical components leaching rates. An organic stratum was found over the surface of the pen soil, separated from the underlying soil by a highly compacted thin layer (the manure-soil interface). The soil beneath the organic layer showed greater bulk density in the A horizon than in the control soil and had greater moisture content. Greater concentrations of chloride were found as a consequence of the partial sealing of the manure-soil interface. Surface runoff was the dominant process in the feedlot pen soil, whereas infiltration was the main process in control soil. Soil moisture flux beneath pens decreased substantially after 15 yr of activity. The estimated minimum leaching rate of chloride was 13 times faster than the estimated soil moisture flux. This difference suggests that chloride ions are not exclusively transported by advective flow under our conditions but also by solute diffusion and preferential flow. Copyright © by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America, Inc.
Observations of MCG-5-23-16 with Suzaku, XMM-Newton and Nustar
Zoghbi, A.; Cackett, E. M.; Reynolds, C.
MCG-5-23-16 is one of the first active galactic nuclei (AGNs) where relativistic reverberation in the iron K line originating in the vicinity of the supermassive black hole was found, based on a short XMM-Newton observation. In this work, we present the results from long X-ray observations using … Suzaku, XMM-Newton, and NuSTAR designed to map the emission region using X-ray reverberation. A relativistic iron line is detected in the lag spectra on three different timescales, allowing the emission from different regions around the black hole to be separated. Using NuSTAR coverage of energies above...
Shukri Klinaku
Using the general formula for the Doppler effect at an arbitrary angle, the three famous experiments of the special theory of relativity are examined. The experiments of Michelson, Kennedy–Thorndike and Ives–Stilwell are explained in a precise and elegant way without postulates, arbitrary assumptions or approximations. Keywords: Doppler effect, Michelson experiment, Kennedy–Thorndike experiment, Ives–Stilwell experiment
Listening in the Silences for Fred Newton Scott
Mastrangelo, Lisa
As part of her recent sabbatical, the author proposed going to the University of Michigan Bentley Archives to do research on Fred Newton Scott, founder and chair of the Department of Rhetoric and teacher from 1889 to 1926 at the University of Michigan. Scott ran the only graduate program in rhetoric and composition in the country between those…
Dramatic (and Simple!) Demonstration of Newton's Third Law
Feldman, Gerald
An operational understanding of Newton's third law is often elusive for students. Typical examples of this concept are given for contact forces that are closer to the students' everyday experience. While this is a good thing in general, the reaction force can sometimes be taken for granted, and the students can miss the opportunity to really think…
Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.
Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne
To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to outer needle cap design to mitigate needle stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.
Pervasive liquid metal based direct writing electronics with roller-ball pen
Yi Zheng
A roller-ball pen that enables direct-written electronics using a room-temperature liquid metal ink is proposed. With the rolling-to-print mechanism, the metallic ink is smoothly written on a flexible polymer substrate to form conductive tracks and electronic devices. A contact angle analyzer and a scanning electron microscope were used to reveal several unique properties of the obtained electronics. A high writing resolution, with line width and thickness of 200 μm and 80 μm, respectively, was realized. Further, with the application of external writing pressure, GaIn24.5 droplets show increasing wettability on the polymer, which demonstrates the pervasive adaptability of roller-ball pen electronics.
Properties of H- and D- beams from magnetron and Penning sources
Sluyters, T.; Kovarik, V.
The quality of negative hydrogen isotope beams is evaluated after extraction from magnetron and Penning sources. The general conclusions of these measurements are that: (a) the beam quality from these plasma sources is adequate for the transport of high current negative ion beams in bending magnets; (b) there is evidence of practically complete space charge neutralization in the drift space beyond the extractor; (c) the beam performance from the Penning source appears to be better than that from the magnetron source; and (d) it is likely that the high electric field gradient and a concave ion emission boundary are responsible for a beam cross-over near the anode aperture, which causes beam divergence practically independent of the extraction geometry.
The flight of Newton's cannonball
Pesnell, W. Dean
Newton's Cannon is a thought experiment used to motivate orbital motion. Cannonballs were fired from a high mountain at increasing muzzle velocity until they orbit the Earth. We will use the trajectories of these cannonballs to describe the shape of orbital tunnels that allow a cannonball fired from a high mountain to pass through the Earth. A sphere of constant density is used as the model of the Earth to take advantage of the analytic solutions for the interior trajectories that exist for that model. For the example shown, the cannonball trajectories that pass through the Earth intersect near the antipodal point of the cannon.
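For illustration only, the constant-density model described in this abstract makes the interior gravitational force linear in radius, so trajectories that pass through the Earth are easy to integrate numerically. The following sketch is not the paper's analysis; the launch speeds, time step, and Earth parameters are assumptions chosen for demonstration.

```python
import numpy as np

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
M = 5.972e24           # Earth mass, kg (assumed)
R = 6.371e6            # Earth radius, m (assumed)

def acceleration(pos):
    """Gravity of a uniform-density sphere: a ∝ r inside, 1/r^2 outside."""
    r = np.linalg.norm(pos)
    if r < R:
        return -G * M * pos / R**3
    return -G * M * pos / r**3

def integrate(v0, dt=1.0, t_max=6000.0):
    """Leapfrog-integrate a cannonball launched horizontally with speed v0 (m/s)."""
    pos = np.array([0.0, R])       # start on the surface of the model sphere
    vel = np.array([v0, 0.0])
    path = [pos.copy()]
    acc = acceleration(pos)
    for _ in range(int(t_max / dt)):
        vel += 0.5 * dt * acc
        pos += dt * vel
        acc = acceleration(pos)
        vel += 0.5 * dt * acc
        path.append(pos.copy())
    return np.array(path)

orbit = integrate(v0=7900.0)       # roughly circular-orbit launch speed
tunnel = integrate(v0=2000.0)      # slower shot that dips through the interior
print(orbit.shape, tunnel.shape)
```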
Development of compact size penning ion source for compact neutron generator.
Das, Basanta Kumar; Shyam, Anurag
Penning-type ion sources, which offer long-life operation, easy mounting, and compact size, are widely used in different fields of research such as neutron generators, materials research, and surface etching. One Penning-type ion source has been developed in our laboratory. By applying a high voltage of 2 kV between two oppositely biased electrodes and using a permanent magnet providing a 500 gauss magnetic field along the axis, we produced a glow discharge in the plasma region. The performance of this source was investigated using nitrogen gas. Deuterium ions were produced and extracted on the basis of the chosen electrodes and the angle of extraction. Using a single-aperture plasma electrode, the beam was extracted along the axial direction. The geometry of the plasma electrode is an important factor for the efficient extraction of ions from the plasma ion source. The extracted ion current depends upon the shape of the plasma meniscus. A concave plasma meniscus produces a converged ion beam. The convergence of the extracted ions is related to the extraction electrode angle: the greater the angle, the more the beam converges. We studied this effect experimentally with a compact Penning ion source. A detailed comparison among the different extraction geometries and electrode angles is discussed in this paper.
Keywords: children, flexible bronchoscopy, Fogarty catheter, foreign body aspiration, pen cap, rigid ... plastic foreign body with central perforation occluding the trachea in supracarinal ... venous steroids and was discharged home on postoperative day 1 without ... history may not be as reliable, if not witnessed by an adult.
Illustrating Newton's Second Law with the Automobile Coast-Down Test.
Bryan, Ronald A.; And Others
Describes a run test of automobiles for applying Newton's second law of motion and the concept of power. Explains some automobile thought-experiments and provides the method and data of an actual coast-down test. (YP)
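As a hedged illustration of the arithmetic behind such a coast-down test (the speed samples and vehicle mass below are invented, not taken from the article): the measured deceleration is converted to a retarding force through Newton's second law, F = m dv/dt, and to dissipated power through P = F v.

```python
import numpy as np

# Hypothetical coast-down data: speed sampled every 2 s while rolling in neutral.
t = np.arange(0.0, 21.0, 2.0)                    # s
v = np.array([30.0, 28.9, 27.9, 26.9, 26.0,      # m/s
              25.1, 24.3, 23.5, 22.7, 22.0, 21.3])
mass = 1500.0                                    # kg, assumed

# Newton's second law: resistive force is mass times the (negative) acceleration.
dvdt = np.gradient(v, t)
force = -mass * dvdt                             # N, positive = resistive
power = force * v                                # W dissipated at each speed

for vi, fi, pi in zip(v, force, power):
    print(f"v = {vi:5.1f} m/s   F ≈ {fi:6.1f} N   P ≈ {pi/1000:5.1f} kW")
```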
Individual and pen-based oral fluid sampling: A welfare-friendly sampling method for group-housed gestating sows.
Pol, Françoise; Dorenlor, Virginie; Eono, Florent; Eudier, Solveig; Eveno, Eric; Liégard-Vanhecke, Dorine; Rose, Nicolas; Fablet, Christelle
The aims of this study were to assess the feasibility of individual and pen-based oral fluid sampling (OFS) in 35 pig herds with group-housed sows, compare these methods to blood sampling, and assess the factors influencing the success of sampling. Individual samples were collected from at least 30 sows per herd. Pen-based OFS was performed using devices placed in at least three pens for 45min. Information related to the farm, the sows, and their living conditions were collected. Factors significantly associated with the duration of sampling and the chewing behaviour of sows were identified by logistic regression. Individual OFS took 2min 42s on average; the type of floor, swab size, and operator were associated with a sampling time >2min. Pen-based OFS was obtained from 112 devices (62.2%). The type of floor, parity, pen-level activity, and type of feeding were associated with chewing behaviour. Pen activity was associated with the latency to interact with the device. The type of floor, gestation stage, parity, group size, and latency to interact with the device were associated with a chewing time >10min. After 15, 30 and 45min of pen-based OFS, 48%, 60% and 65% of the sows were lying down, respectively. The time spent after the beginning of sampling, genetic type, and time elapsed since the last meal were associated with 50% of the sows lying down at one time point. The mean time to blood sample the sows was 1min 16s and 2min 52s if the number of operators required was considered in the sampling time estimation. The genetic type, parity, and type of floor were significantly associated with a sampling time higher than 1min 30s. This study shows that individual OFS is easy to perform in group-housed sows by a single operator, even though straw-bedded animals take longer to sample than animals housed on slatted floors, and suggests some guidelines to optimise pen-based OFS success. Copyright © 2017 Elsevier B.V. All rights reserved.
Extension of Newton's Dynamical Spectral Shift for Photons in ...
Extension of Newton's Dynamical Spectral Shift for Photons in Gravitational Fields of Static Homogeneous Spherical Massive Bodies. ... is perfectly in agreement with the physical fact that gravitational scalar potential is negative and increase in recession leads to decrease in kinetic energy and hence decrease in frequency.
Detecting yocto (10⁻²⁴) newton forces with trapped ions
Uys, H
This article reports on a calibrated measurement of 174 yoctonewtons using a cloud of 60 ⁹Be⁺ ions confined in a Penning ion trap. These measurements suggest that ion traps may form the basis of a new class of ultrasensitive deployable force sensors…
Quadrupole deflector of the double Penning trap system MLLTRAP
Gartzke, Eva; Kolhinen, Veli; Habs, Dietrich; Neumayr, Juergen; Schuermann, Christian; Szerypo, Jerzy; Thirolf, Peter [Fakultaet fuer Physik, LMU Muenchen, Garching (Germany); Maier-Leibnitz Laboratory, Garching (Germany)
A cylindrical double Penning trap has been installed and successfully commissioned at the Maier-Leibnitz Laboratory in Garching. This trap system has been designed to isobarically purify low energy ion beams and perform highly accurate mass measurements. An electrostatic quadrupole deflector has been designed and installed at the injection line of the Penning trap system, enabling the simultaneous use of an online ion beam with reference ions from an offline ion source. Alternatively, two offline sources can be used concurrently, e.g. an α-recoil source providing heavy radioactive species (e.g. ²⁴⁰U) together with reference mass ions (which in the future will come from e.g. a carbon cluster ion source). The bender has been designed for beam energies up to 1 keV with q/A ratios 1/1-1/250. This presentation shows the technical design and the operating parameters of the quadrupole beam bender and its implementation at the MLLTRAP system.
About the measurements systems with pen and thermoluminescent dosemeters
Cortes I, M.E.; Ramirez G, F.P.
This work presents dosimetric data obtained with pen and thermoluminescent dosemeters, which are used by the occupationally exposed personnel (OEP) of the Mexican Petroleum Institute (IMP). Several important characteristics are highlighted, for example the differences among the units used by one and the other dosemeter type. Likewise, various problems encountered at the IMP when relating the data obtained with these dosemeters (used by the OEP) to the ICRP 60 (1990) recommendations are described. One of the most important difficulties is satisfying the limits recommended by the ICRP, particularly those referring to the units and their complex calculations. With respect to the units, the ICRP refers to the concepts of 'dose equivalent' and 'effective dose' expressed in sievert, which the General Regulations for Radiological Safety associate with 'dose equivalent' and 'effective dose equivalent'. The type of dosimetric statistics obtained with the TLD readings and an OEP pen dosemeter during 1997 is illustrated. (Author)
Second Generation Electronic Nicotine Delivery System Vape Pen Exposure Generalizes as a Smoking Cue.
King, Andrea C; Smith, Lia J; McNamara, Patrick J; Cao, Dingcai
Second generation electronic nicotine delivery systems (ENDS; also known as e-cigarettes, vaporizers or vape pens) are designed for a customized nicotine delivery experience and have less resemblance to regular cigarettes than first generation "cigalikes." The present study examined whether they generalize as a conditioned cue and evoke smoking urges or behavior in persons exposed to their use. Data were analyzed in N = 108 young adult smokers (≥5 cigarettes per week) randomized to either a traditional combustible cigarette smoking cue or a second generation ENDS vaping cue in a controlled laboratory setting. Cigarette and e-cigarette urge and desire were assessed pre- and post-cue exposure. Smoking behavior was also explored in a subsample undergoing a smoking latency phase after cue exposure (N = 26). The ENDS vape pen cue evoked both urge and desire for a regular cigarette to a similar extent as that produced by the combustible cigarette cue. Both cues produced similar time to initiate smoking during the smoking latency phase. The ENDS vape pen cue elicited smoking urge and desire regardless of ENDS use history, that is, across ENDS naїve, lifetime or current users. Inclusion of past ENDS or cigarette use as covariates did not significantly alter the results. These findings demonstrate that observation of vape pen ENDS use generalizes as a conditioned cue to produce smoking urge, desire, and behavior in young adult smokers. As the popularity of these devices may eventually overtake those of first generation ENDS cigalikes, exposure effects will be of increasing importance. This study shows that passive exposure to a second generation ENDS vape pen cue evoked smoking urge, desire, and behavior across a range of daily and non-daily young adult smokers. Smoking urge and desire increases after vape pen exposure were similar to those produced by exposure to a first generation ENDS cigalike and a combustible cigarette, a known potent cue. Given the increasing
Medium-resolution isaac newton telescope library of empirical spectra
Sanchez-Blazquez, P.; Peletier, R. F.; Jimenez-Vicente, J.; Cardiel, N.; Cenarro, A. J.; Falcon-Barroso, J.; Gorgas, J.; Selam, S.; Vazdekis, A.
A new stellar library developed for stellar population synthesis modelling is presented. The library consists of 985 stars spanning a large range in atmospheric parameters. The spectra were obtained at the 2.5-m Isaac Newton Telescope and cover the range λλ3525-7500 Å at 2.3
Study of argon-based Penning gas mixtures for use in proportional counters
Agrawal, P.C.; Ramsey, B.D.; Weisskopf, M.C.
Results from an experimental investigation of three Penning gas mixtures, namely argon-acetylene (Ar-C₂H₂), argon-xenon (Ar-Xe) and argon-xenon-trimethylamine (Ar-Xe-TMA), are reported. The measurements, carried out in cylindrical geometry as well as parallel plate geometry detectors, demonstrate that the Ar-C₂H₂ mixtures show a significant Penning effect even at an acetylene concentration of 10% and provide the best energy resolution among all the argon-based gas mixtures (≤13% FWHM at 5.9 keV and 6.7% at 22.2 keV). In the parallel plate detector the Ar-C₂H₂ fillings provide a resolution of ≅7% FWHM at 22.2 keV up to a gas gain of at least ≅10⁴. The nonmetastable Penning mixture Ar-Xe provides the highest gas gain among all the argon-based gas mixtures and is well suited for use in long-duration space-based experiments. Best results are obtained with 5% and 20% Xe in Ar, the energy resolution being ≅7% FWHM at 22.2 keV and ≅4.5% at 59.6 keV for a gas gain of ≅10³. Addition of ≥1% TMA to an 80% Ar-20% Xe mixture produces a dramatic increase in gas gain but the energy resolution remains unaffected (≅7% FWHM at 22.2 keV). This increase in gas gain is attributed to the occurrence of a Penning effect between Xe and TMA, the ionization potential of TMA being 8.3 eV, just below the xenon metastable potential of 8.39 eV. (orig.)
Temperature and body weight affect fouling of pig pens.
Aarnink, A J A; Schrama, J W; Heetkamp, M J W; Stefanowska, J; Huynh, T T T
Fouling of the solid lying area in pig housing is undesirable for reasons of animal welfare, animal health, environmental pollution, and labor costs. In this study the influence of temperature on the excreting and lying behavior of growing-finishing pigs of different BW (25, 45, 65, 85, or 105 kg) was studied. Ten groups of 5 pigs were placed in partially slatted pens (60% solid concrete, 40% metal-slatted) in climate respiration chambers. After an adaptation period, temperatures were raised daily for 9 d. Results showed that above certain inflection temperatures (IT; mean 22.6 degrees C, SE = 0.78) the number of excretions (relative to the total number of excretions) on the solid floor increased with temperature (mean increase 9.7%/ degrees C, SE = 1.41). Below the IT, the number of excretions on the solid floor was low and not influenced by temperature (mean 13.2%, SE = 3.5). On average, the IT for excretion on the solid floor decreased with increasing BW, from approximately 25 degrees C at 25 kg to 20 degrees C at 100 kg of BW (P temperature also affected the pattern and postural lying. The temperature at which a maximum number of pigs lay on the slatted floor (i.e., the IT for lying) decreased from approximately 27 degrees C at 25 kg to 23 degrees C at 100 kg of BW (P temperatures, pigs lay more on their sides and less against other pigs (P Temperature affects lying and excreting behavior of growing-finishing pigs in partially slatted pens. Above certain IT, pen fouling increases linearly with temperature. Inflection temperatures decrease at increasing BW.
Dark Matter Search Using XMM-Newton Observations of Willman 1
Lowenstein, Michael; Kusenko, Alexander
We report the results of a search for an emission line from radiatively decaying dark matter in the ultra-faint dwarf spheroidal galaxy Willman 1 based on analysis of spectra extracted from XMM-Newton X-ray Observatory data. The observation follows up our analysis of Chandra data of Willman 1 that resulted in line flux upper limits over the Chandra bandpass and evidence of a 2.5 keV feature at a significance below the 99% confidence threshold used to define the limits. The higher effective area of the XMM-Newton detectors, combined with application of recently developed methods for extended-source analysis, allows us to derive improved constraints on the combination of mass and mixing angle of the sterile neutrino dark matter candidate. We do not confirm the Chandra evidence for a 2.5 keV emission line.
Efficient management of high level XMM-Newton science data products
Zolotukhin, Ivan
As is the case for many large projects, XMM-Newton data have been used by the community to produce many valuable higher-level data products. However, even after 15 years of successful mission operation, the potential of these data is not yet fully uncovered, mostly due to logistical and data-management issues. We present a web application, http://xmm-catalog.irap.omp.eu, to highlight the idea that existing public high-level data collections generate significant added research value when organized and exposed properly. Several application features, such as access to the all-time XMM-Newton photon database and online fitting of extracted source spectra, were never available before. In this talk we share the best practices we worked out during the development of this website and discuss their potential use for other large projects generating astrophysical data.
Preparation and characterisation of irradiated crab chitosan and New Zealand Arrow squid pen chitosan
Shavandi, Amin; Bekhit, Adnan A.; Bekhit, Alaa El-Din A.; Sun, Zhifa; Ali, M. Azam
The properties of chitosan from Arrow squid (Nototodarus sloanii) pen (CHS) and commercial crab shell (CHC) were investigated using FTIR, DSC, SEM and XRD before and after irradiation at the dose of 28 kGy in the presence or absence of 5% water. Also, the viscosity, deacetylation degree, water and oil holding capacities, colour and antimicrobial activities of the chitosan samples were determined. Irradiation decreased (P pen chitosan was whiter in colour (White Index = 90.06%) compared to CHC (White Index = 83.70%). Generally, the CHC samples (control and irradiated) exhibited better antibacterial activity compared to CHS, but the opposite was observed with antifungal activity. - Highlights: • Chitosan prepared from Arrow squid pens (Nototodarus sloanii). • Chitosan samples were gamma irradiated at 28 kGy. • Squid pen chitosan showed high fat and water uptake capacities compared to crab shell chitosan. • Gamma irradiation enhanced the DDA of squid pen chitosan but not crab shell chitosan.
ESA's XMM-Newton gains deep insights into the distant Universe
First image from the XMM-LSS survey [image, credit: ESA]: The first image from the XMM-LSS survey is actually a combination of fourteen separate 'pointings' of the space observatory. It represents a region of the sky eight times larger than the full Moon and contains around 25 clusters. The circles represent the sources previously known from the 1991 ROSAT All-Sky Survey. A computer programme zooms in on an interesting region [image, credit: ESA]: A computer programme zooms in on an interesting region of the image and identifies the possible cluster. Each point on this graph represents a single X-ray photon detected by XMM-Newton. Most come from distant active galaxies and the computer must perform a sophisticated, statistical computation to determine which X-rays come from clusters. Contour map of clusters [image, credit: ESA]: The computer programme transforms the XMM-Newton data into a contour map of the cluster's probable extent and superimposes it over the CFHT snapshot, allowing the individual galaxies in the cluster to be targeted for further observations with ESO's VLT, to measure its distance and locate the cluster in the universe. Unlike grains of sand on a beach, matter is not uniformly spread throughout the Universe. Instead, it is concentrated into galaxies like our own which themselves congregate into clusters. These clusters are 'strung' throughout the Universe in a web-like structure. Astronomers have studied this large-scale structure of the nearby Universe but have lacked the instruments to extend the search to the large volumes of the distant Universe. Thanks to its unrivalled sensitivity, in less than three hours, ESA's X-ray observatory XMM-Newton can see back about 7000 million years to a cosmological era when the Universe was about half its present size, and clusters of galaxies
Modified Newton-Raphson GRAPE methods for optimal control of spin systems
Goodwin, D. L.; Kuprov, Ilya
Quadratic convergence throughout the active space is achieved for the gradient ascent pulse engineering (GRAPE) family of quantum optimal control algorithms. We demonstrate in this communication that the Hessian of the GRAPE fidelity functional is unusually cheap, having the same asymptotic complexity scaling as the functional itself. This leads to the possibility of using very efficient numerical optimization techniques. In particular, the Newton-Raphson method with a rational function optimization (RFO) regularized Hessian is shown in this work to require fewer system trajectory evaluations than any other algorithm in the GRAPE family. This communication describes algebraic and numerical implementation aspects (matrix exponential recycling, Hessian regularization, etc.) for the RFO Newton-Raphson version of GRAPE and reports benchmarks for common spin state control problems in magnetic resonance spectroscopy.
Goodwin, D. L.; Kuprov, Ilya, E-mail: [email protected] [School of Chemistry, University of Southampton, Highfield Campus, Southampton SO17 1BJ (United Kingdom)
Koom, Woong Sub; Choi, Mi Yeon; Lee, Jeongshim; Park, Eun Jung; Kim, Ju Hye; Kim, Sun-Hyun; Kim, Yong Bae
Purpose: The purpose of this study was to evaluate the efficacy of art therapy to control fatigue in cancer patients during the course of radiotherapy and its impact on quality of life (QoL). Materials and Methods: Fifty cancer patients receiving radiotherapy received weekly art therapy sessions using famous painting appreciation. Fatigue and QoL were assessed using the Brief Fatigue Inventory (BFI) Scale and the Functional Assessment of Chronic Illness Therapy-Fatigue (FACIT-F) at baseline before starting radiotherapy, every week for 4 weeks during radiotherapy, and at the end of radiotherapy. Mean changes in scores over time were analyzed using a generalized linear mixed model. Results: Of the 50 patients, 34 (68%) participated in 4 sessions of art therapy. Generalized linear mixed models testing for the effect of time on mean score changes showed no significant changes in scores from baseline for the BFI and FACIT-F. The mean BFI score and FACIT-F total score changed from 3.1 to 2.7 and from 110.7 to 109.2, respectively. Art therapy based on the appreciation of famous paintings led to increases in self-esteem by increasing self-realization and forming social relationships. Conclusion: Fatigue and QoL in cancer patients with art therapy do not deteriorate during a period of radiotherapy. Despite the single-arm pilot design and small number of participants, this study provides a strong initial demonstration that art therapy based on the appreciation of famous paintings is worthy of further study for fatigue and QoL improvement. Further, it can play an important role in routine practice in cancer patients during radiotherapy. PMID:27306778
Gradual training of alpacas to the confinement of metabolism pens reduces stress when normal excretion behavior is accommodated.
Lund, Kirrin E; Maloney, Shane K; Milton, John T B; Blache, Dominique
Confinement in metabolism pens may provoke a stress response in alpacas that will reduce the welfare of the animal and jeopardize the validity of scientific results obtained in such pens. In this study, we tested a protocol designed to successfully train alpacas to be held in a specially designed metabolism pen so that the animals' confinement would not jeopardize their welfare. We hypothesized that the alpacas would show fewer behaviors associated with a response to stress as training gradually progressed, and that they would adapt to being in the confinement of the metabolism pen. The training protocol was successful at introducing alpacas to the metabolism pens, and it did reduce the incidence of behavioral responses to stress as the training progressed. The success of the training protocol may be attributed to the progressive nature of the training, the tailoring of the protocol to suit alpacas, and the use of positive reinforcement. This study demonstrated that both animal welfare and the validity of the scientific outcomes could be maximized by the gradual training of experimental animals, thereby minimizing the stress imposed on the animals during experimental procedures.
Comparison of bacterial culture and qPCR testing of rectal and pen floor samples as diagnostic approaches to detect enterotoxic Escherichia coli in nursery pigs
Weber, N. R.; Nielsen, J. P.; Hjulsager, Charlotte Kristiane
Enterotoxigenic E. coli (ETEC) are a major cause of diarrhoea in weaned pigs. The objective of this study was to evaluate the agreement at pen level among three different diagnostic approaches for the detection of ETEC in groups of nursery pigs with diarrhoea. The diagnostic approaches used were: bacterial culturing of faecal samples from three pigs (per pen) with clinical diarrhoea and subsequent testing for virulence genes in E. coli isolates; bacterial culturing of pen floor samples and subsequent testing for virulence genes in E. coli isolates; and qPCR testing of pen floor samples in order to determine the quantity of F18 and F4 genes. The study was carried out in three Danish pig herds and included 31 pens with a pen-level diarrhoea prevalence of > 25%, as well as samples from 93 diarrhoeic nursery pigs from these pens. All E. coli isolates were analysed by PCR and classified as ETEC when genes … Agreement was observed between the detection of ETEC by bacterial culture and qPCR in the same pen floor sample in 26 (83.9%, Kappa = 0.679) pens. Conclusion: We observed an acceptable agreement for the detection of ETEC-positive diarrhoeic nursery pigs in pen samples for both bacterial culture of pen floor samples …
Johannes Vermeer and Tom Gouws: textual discourse through pen ...
Poems such as 'ars poetica' and 'die kantklosser' will be read as speaking paintings of visual texts and visual writing through which an exceptional merger of pen and brush come into being. Keywords: iconicity, tipography, cohesion, visual text, ars poetica, syntactic chiasm, texture, canto. Journal for Language Teaching Vol ...
Q-Step methods for Newton-Jacobi operator equation | Uwasmusi ...
The paper considers the Newton-Jacobi operator equation for the solution of nonlinear systems of equations. Special attention is paid to the computational part of this method with particular reference to the q-step methods. Journal of the Nigerian Association of Mathematical Physics Vol. 8 2004: pp. 237-241 …
Evaluation of drop versus trickle-feeding systems for crated or group-penned gestating sows.
Hulbert, L E; McGlone, J J
A total of 160 gilts were used to evaluate the effects of pen vs. crated housing systems and drop- vs. trickle-fed feeding systems on sow productivity, occurrence of lesions during farrowing and weaning, immune measures, and behavioral responses during 2 consecutive gestation periods. Of the 160 eligible gilts, 117 farrowed in parity 1, and of those, 72 farrowed in parity 2. The gilts were randomly assigned to represent 1 of 4 factorially arranged treatment groups: pen drop-fed, crate drop-fed, pen trickle-fed, or crate trickle-fed. Replicate blocks were used for each parity with 5 sows per block initially in each treatment. At weaning, sows housed in pens had greater (P trickle-feeding system. Lesions scores and all other productivity measures did not differ among treatments. An interaction was observed for percentage of neutrophil phagocytosis (P trickle-fed sows, but in crates, drop-fed sows had a tendency for lower phagocytosis than trickle-fed sows. All other immune measures were not different among treatments. The occurrence of oral-nasal-facial (ONF) behaviors (chewing, rooting, and rubbing) and active behaviors increased, and lying behavior decreased (P trickle-feeding systems. None of the environments evaluated were associated with significant physiological stress responses among the sows. Thus, sows were able to adapt within each environment through behavioral mechanisms without the need to invoke major physiological adjustments.
9 CFR 309.7 - Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and driveways.
...; cleaning and disinfection of infected livestock pens and driveways. 309.7 Section 309.7 Animals and Animal... INSPECTION § 309.7 Livestock affected with anthrax; cleaning and disinfection of infected livestock pens and... followed immediately by a thorough disinfection of the exposed premises by soaking the ground, fences...
Forensic Analysis of Blue Ball point Pen Inks on Questioned Documents by High Performance Thin Layer Chromatography Technique (HPTLC)
Lee, L.C.; Siti Mariam Nunurung; Abdul Aziz Ishak
Nowadays, crimes related to forged documents are increasing. Any erasure, addition or modification of document content always involves the use of a writing instrument such as a ballpoint pen. Hence, there is an evident need to develop a fast and accurate ink analysis protocol to solve this problem. This study aimed to determine the discrimination power of the high performance thin layer chromatography (HPTLC) technique for analyzing a set of blue ballpoint pen inks. Ink samples deposited on paper were extracted using methanol and separated with a solvent mixture of ethyl acetate, methanol and distilled water (70:35:30, v/v/v). With this method, a discrimination power of 89.40% was achieved, which confirms that the proposed method was able to differentiate a significant number of pen-pair samples. In addition, the composition of the blue pen inks was found to be homogeneous (RSD < 2.5%) and the proposed method showed good repeatability and reproducibility (RSD < 3.0%). In conclusion, HPTLC is an effective tool to separate blue ballpoint pen inks. (author)
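For context, discrimination power is conventionally reported as the fraction of sample pairs that can be told apart out of all possible pairs. The sketch below computes it from hypothetical chromatographic band positions (the Rf values and tolerance are invented, not taken from the study).

```python
from itertools import combinations

# Hypothetical Rf profiles for five blue ballpoint inks (values are invented).
inks = {
    "ink_A": (0.12, 0.45, 0.78),
    "ink_B": (0.12, 0.45, 0.78),   # indistinguishable from ink_A
    "ink_C": (0.10, 0.40, 0.70),
    "ink_D": (0.22, 0.55, 0.81),
    "ink_E": (0.10, 0.41, 0.70),
}

def distinguishable(p, q, tol=0.02):
    """Two profiles differ if any band's Rf differs by more than tol."""
    return any(abs(a - b) > tol for a, b in zip(p, q))

pairs = list(combinations(inks, 2))
discriminated = sum(distinguishable(inks[a], inks[b]) for a, b in pairs)
dp = discriminated / len(pairs)
print(f"Discrimination power: {dp:.1%} ({discriminated}/{len(pairs)} pairs)")
```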
Forensic analysis of ballpoint pen inks using paper spray mass spectrometry.
da Silva Ferreira, Priscila; Fernandes de Abreu e Silva, Débora; Augusti, Rodinei; Piccin, Evandro
A novel analytical approach based on paper spray mass spectrometry (PS-MS) is developed for a fast and effective forensic analysis of inks in documents. Ink writings made in ordinary paper with blue ballpoint pens were directly analyzed under ambient conditions without any prior sample preparation. Firstly, the method was explored on a set of distinct pens and the results obtained in the positive ion mode, PS(+)-MS, demonstrated that pens from different brands provide typical profiles. Simple visual inspection of the PS(+)-MS led to the distinction of four different combinations of dyes and additives in the inks. Further discrimination was performed by using the concept of relative ion intensity (RII), owing to the large variability of dyes BV3 and BB26 regarding their demethylated homologues. Following screening and differentiation studies, the composition changes of ink entries subjected to light exposure were also monitored by PS-MS. The results of these tests revealed distinct degradation behaviors which were reflected on the typical chemical profiles of the studied inks, attesting that PS-MS may be also useful to verify the fading of dyes thus allowing the discrimination of entries on a document. As proof of concept experiments, PS-MS was successfully utilized for the analysis of archived documents and characterization of overlapped ink lines made on simulated forged documents.
Fast and exact Newton and Bidirectional fitting of Active Appearance Models.
Kossaifi, Jean; Tzimiropoulos, Yorgos; Pantic, Maja
Active Appearance Models (AAMs) are generative models of shape and appearance that have proven very attractive for their ability to handle wide changes in illumination, pose and occlusion when trained in the wild, while not requiring the large training datasets that regression-based or deep learning methods do. The problem of fitting an AAM is usually formulated as a non-linear least squares one and the main way of solving it is a standard Gauss-Newton algorithm. In this paper we extend Active Appearance Models in two ways: we first extend the Gauss-Newton framework by formulating a bidirectional fitting method that deforms both the image and the template to fit a new instance. We then formulate a second order method by deriving an efficient Newton method for AAM fitting. We derive both methods in a unified framework for two types of Active Appearance Models, holistic and part-based, and additionally show how to exploit the structure in the problem to derive fast yet exact solutions. We perform a thorough evaluation of all algorithms on three challenging and recently annotated in-the-wild datasets, and investigate fitting accuracy, convergence properties and the influence of noise in the initialisation. We compare our proposed methods to other algorithms and show that they yield state-of-the-art results, out-performing other methods while having superior convergence properties.
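The fitting problem mentioned here is a nonlinear least-squares one solved with Gauss-Newton steps. As a hedged, generic illustration of that step (fitting a toy exponential model rather than an AAM; the model, data, and iteration count are assumptions), a minimal Gauss-Newton loop looks like this:

```python
import numpy as np

# Toy model y = a * exp(b * x): residual and Jacobian with respect to (a, b).
def residual(params, x, y):
    a, b = params
    return a * np.exp(b * x) - y

def jacobian(params, x):
    a, b = params
    e = np.exp(b * x)
    return np.column_stack([e, a * x * e])

def gauss_newton(params, x, y, iters=20):
    for _ in range(iters):
        r = residual(params, x, y)
        J = jacobian(params, x)
        # Solve the normal equations J^T J dp = -J^T r for the parameter update.
        dp = np.linalg.solve(J.T @ J, -J.T @ r)
        params = params + dp
    return params

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 50)
y = 2.0 * np.exp(1.5 * x) + 0.01 * rng.standard_normal(x.size)
print(gauss_newton(np.array([1.5, 1.2]), x, y))   # should approach (2.0, 1.5)
```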
Status of the Penning trap project in Munich
Szerypo, J.; Kolhinen, V.S.; Gartzke, E.; Habs, D.; Neumayr, J.; Schuermann, C.; Sewtz, M.; Thirolf, P.G.; Bussmann, M.; Schramm, U.
The MLLTRAP at the Maier-Leibnitz-Laboratory (Garching) is a new Penning trap facility designed to combine several novel technologies to decelerate, charge breed, cool, bunch and purify the reaction products and perform high-accuracy nuclear and atomic mass measurements. It is now in the commissioning phase, achieving a mass-resolving power of about 10⁵ in the purification trap for stable ions. (orig.)
The Use of Kruskal-Newton Diagrams for Differential Equations
Fishaleck, T.; White, R.B.
The method of Kruskal-Newton diagrams for the solution of differential equations with boundary layers is shown to provide rapid intuitive understanding of layer scaling and can result in the conceptual simplification of some problems. The method is illustrated using equations arising in the theory of pattern formation and in plasma physics.
Newton's Laws, Euler's Laws and the Speed of Light
Whitaker, Stephen
Chemical engineering students begin their studies of mechanics in a department of physics where they are introduced to the mechanics of Newton. The approach presented by physicists differs in both perspective and substance from that encountered in chemical engineering courses where Euler's laws provide the foundation for studies of fluid and solid…
Tracking Color Shift in Ballpoint Pen Ink Using Photoshop Assisted Spectroscopy: A Nondestructive Technique Developed to Rehouse a Nobel Laureate's Manuscript.
Wright, Kristi; Herro, Holly
Many historically and culturally significant documents from the mid-to-late twentieth century were written in ballpoint pen inks, which contain light-sensitive dyes that present problems for collection custodians and paper conservators. The conservation staff at the National Library of Medicine (NLM), National Institutes of Health, conducted a multiphase project on the chemistry and aging of ballpoint pen ink that culminated in the development of a new method to detect aging of ballpoint pen ink while examining a variety of storage environments. NLM staff determined that ballpoint pen ink color shift can be detected noninvasively using image editing software. Instructions are provided on how to detect color shift in digitized materials using a technique developed specifically for this project, Photoshop Assisted Spectroscopy. The study results offer collection custodians storage options for historic documents containing ballpoint pen ink.
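The workflow described amounts to sampling the same ink stroke in two digitizations and comparing colour values. A rough numpy equivalent is sketched below; the pixel values are synthetic and the simple RGB-distance metric is an assumption for illustration, not the NLM procedure itself.

```python
import numpy as np

# Synthetic RGB samples of the same ink stroke in two digitizations
# (values are invented; in practice they would be sampled from scans).
scan_before = np.array([[62, 68, 142], [60, 70, 140], [63, 66, 145]], float)
scan_after = np.array([[78, 80, 139], [75, 82, 137], [79, 79, 141]], float)

mean_before = scan_before.mean(axis=0)
mean_after = scan_after.mean(axis=0)
shift = mean_after - mean_before

# Euclidean distance in RGB space as a crude, device-dependent shift metric.
magnitude = np.linalg.norm(shift)
print(f"mean RGB before: {mean_before}, after: {mean_after}")
print(f"per-channel shift: {shift}, overall magnitude: {magnitude:.1f}")
```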
Pen Branch fault: Confirmatory drilling results
Stieve, A.; Coruh, C.; Costain, J.K.
The Confirmatory Drilling Project is the final investigation under the Pen Branch Fault Program initiated to determine the capability of the Pen Branch fault (PBF) to release seismic energy. This investigation focused on a small zone over the fault where previously collected seismic reflection data had indicated the fault deforms the subsurface at 150 msec (with reference to an 80 m reference datum). Eighteen drill holes, 2 to basement and the others to 300 ft, were arranged in a scatter pattern over the fault. To adequately define the configuration of the layers deformed by the fault, boreholes were spaced over a zone of 800 ft, north to south. The closely spaced data were to confirm or refute the existence of flat lying reflectors observed in seismic reflection data and to enable the authors to identify and correlate lithologic layers with seismic reflection data. Results suggest that deformation by the fault in sediments 300 ft deep and shallower is subtle. Corroboration of the geologic interpretation with the seismic reflection profile is ongoing but preliminary results indicate that specific reflectors can be assigned to lithologic layers. A large amplitude package of reflections below a flat lying continuous reflection at 40 msec can be correlated with a lithology that corresponds to carbonate sediments in geologic cross-section. Further, data also show that a geologic layer as shallow as 30 ft can be traced on these seismic data over the same subsurface distance where geologic cross-section shows corresponding continuity. The subsurface structure is thus corroborated by both methods at this study site.
The role of competing knowledge structures in undermining learning: Newton's second and third laws
Low, David J.; Wilson, Kate F.
We investigate the development of student understanding of Newton's laws using a pre-instruction test (the Force Concept Inventory), followed by a series of post-instruction tests and interviews. While some students' somewhat naive, pre-existing models of Newton's third law are largely eliminated following a semester of teaching, we find that a particular inconsistent model is highly resilient to, and may even be strengthened by, instruction. If test items contain words that cue students to think of Newton's second law, then students are more likely to apply a "net force" approach to solving problems, even if it is inappropriate to do so. Additional instruction, reinforcing physical concepts in multiple settings and from multiple sources, appears to help students develop a more connected and consistent level of understanding. We recommend explicitly encouraging students to check their work for consistency with physical principles, along with the standard checks for dimensionality and order of magnitude, to encourage reflective and rigorous problem solving.
Paul Wittgenstein's right arm and his phantom: the saga of a famous concert pianist and his amputation.
Boller, François; Bogousslavsky, Julien
Reports of postamputation pain and problems linked to phantom limbs have increased in recent years, particularly in relation to war-related amputations. These problems are still poorly understood and are considered rather mysterious, and they are difficult to treat. In addition, they may shed light on brain physiology and neuropsychology. Functional neuroimaging techniques now enable us to better understand their pathophysiology and to consider new rehabilitation techniques. Several artists have suffered from postamputation complications and this has influenced not only their personal life but also their artistic work. Paul Wittgenstein (1887-1961), a pianist whose right arm was amputated during the First World War, became a famous left-handed concert performer. His case provides insight into Post-World War I musical and political history. More specifically, the impact on the artistic life of this pianist illustrates various postamputation complications, such as phantom limb, stump pain, and especially moving phantom. The phantom movements of his right hand helped him develop the dexterity of his left hand. Wittgenstein played piano works that were written especially for him (the most famous being Ravel's Concerto for the Left Hand) and composed some of his own. Additionally, several famous composers had previously written for the left hand. © 2015 Elsevier B.V. All rights reserved.
Producción de entropía y ley de enfriamiento de newton
Barragán, Daniel
For a system with an internal heat-generation source, the evolution equations describing heat transfer according to Newton's law of cooling are analysed within the framework of the thermodynamics of irreversible processes. From the entropy-flux balance it is shown that entropy production is not minimal in the steady state described by Newton's law of cooling. Likewise, it is discussed how to carry out the flux balance in the system, its connec...
HIGH-RESOLUTION XMM-NEWTON SPECTROSCOPY OF THE COOLING FLOW CLUSTER A3112
Bulbul, G. Esra; Smith, Randall K.; Foster, Adam [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Cottam, Jean; Loewenstein, Michael; Mushotzky, Richard; Shafer, Richard, E-mail: [email protected] [NASA Goddard Space Flight Center, Greenbelt, MD (United States)
We examine high signal-to-noise XMM-Newton European Photon Imaging Camera (EPIC) and Reflection Grating Spectrometer (RGS) observations to determine the physical characteristics of the gas in the cool core and outskirts of the nearby rich cluster A3112. The XMM-Newton Extended Source Analysis Software data reduction and background modeling methods were used to analyze the XMM-Newton EPIC data. From the EPIC data, we find that the iron and silicon abundance gradients show significant increase toward the center of the cluster while the oxygen abundance profile is centrally peaked but has a shallower distribution than that of iron. The X-ray mass modeling is based on the temperature and deprojected density distributions of the intracluster medium determined from EPIC observations. The total mass of A3112 obeys the M-T scaling relations found using XMM-Newton and Chandra observations of massive clusters at r₅₀₀. The gas mass fraction f_gas = 0.149 (+0.036/-0.032) at r₅₀₀ is consistent with the seven-year Wilkinson Microwave Anisotropy Probe results. The comparisons of line fluxes and flux limits on the Fe XVII and Fe XVIII lines obtained from high-resolution RGS spectra indicate that there is no spectral evidence for cooler gas associated with the cluster with temperature below 1.0 keV in the central <38'' (≈52 kpc) region of A3112. High-resolution RGS spectra also yield an upper limit to the turbulent motions in the compact core of A3112 (206 km s⁻¹). We find that the contribution of turbulence to total energy is less than 6%. This upper limit is consistent with the energy contribution measured in recent high-resolution simulations of relaxed galaxy clusters.
A direct Newton-Raphson economic dispatch
Lin, C.E.; Chen, S.T.; Huang, C.L.
This paper presents a new method to solve the real-time economic dispatch problem using an alternative Jacobian matrix considering system constraints. The transmission loss is approximately expressed in terms of the generating powers and the generalized generation shift distribution factor. Based on this expression, a set of simultaneous equations for the Jacobian matrix is formulated and solved by the Newton-Raphson method. The proposed method eliminates the penalty factor calculation and solves the economic dispatch directly. The proposed method achieves a very fast solution speed and maintains good accuracy in test examples. It is a good approach to solving the economic dispatch problem.
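As a minimal, hedged illustration of the underlying idea (not the paper's formulation: losses and distribution factors are ignored, and the cost coefficients and demand are invented), a lossless two-generator dispatch can be solved by applying Newton-Raphson to the equal-incremental-cost and power-balance conditions:

```python
import numpy as np

# Toy quadratic cost curves C_i(P) = a_i + b_i*P + c_i*P^2 (coefficients invented).
b = np.array([7.0, 7.85])       # $/MWh
c = np.array([0.008, 0.0097])   # $/MWh^2
demand = 500.0                  # MW, lossless power balance assumed

def mismatch(x):
    p1, p2, lam = x
    return np.array([
        2 * c[0] * p1 + b[0] - lam,   # incremental cost of unit 1 equals lambda
        2 * c[1] * p2 + b[1] - lam,   # incremental cost of unit 2 equals lambda
        p1 + p2 - demand,             # power balance
    ])

def jacobian(x):
    return np.array([
        [2 * c[0], 0.0, -1.0],
        [0.0, 2 * c[1], -1.0],
        [1.0, 1.0, 0.0],
    ])

x = np.array([250.0, 250.0, 10.0])            # initial guess
for _ in range(10):
    dx = np.linalg.solve(jacobian(x), -mismatch(x))
    x = x + dx
    if np.linalg.norm(dx) < 1e-9:
        break

p1, p2, lam = x
print(f"P1 = {p1:.1f} MW, P2 = {p2:.1f} MW, lambda = {lam:.3f} $/MWh")
```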
Newton's Investigation of the Resistance to Moving Bodies in Continuous Fluids and the Nature of "Frontier Science"
Newton's experiments into the resistance which fluids offer to moving bodies provide some insight into the way he related theory and experiment. His theory demonstrates a way of thought typical of 17th century physics and his experiments are simple enough to be replicated by present day students. Newton's investigations using pendulums were…
Fabrication of a pen-shaped portable biochemical reaction system based on magnetic bead manipulation
Shikida, Mitsuhiro; Inagaki, Noriyuki; Okochi, Mina; Honda, Hiroyuki; Sato, Kazuo
A pen-shaped platform that is similar to a mechanical pencil is proposed for producing a portable reaction system. A reaction unit, as the key component in the system, was produced by using a heat shrinkable tube. A mechanical pencil supplied by Mitsubishi Pencil Co. Ltd was used as the pen-shaped platform for driving the reaction cylinder. It was actuated using an inchworm motion. We confirmed that the magnetic beads were successfully manipulated in the droplet in the cylinder-shaped reaction units. (technical note)
Quantum Mechanics from Newton's Second Law and the Canonical Commutation Relation [X,P]=i
Palenik, Mark C.
Despite the fact that it has been known since the time of Heisenberg that quantum operators obey a quantum version of Newton's laws, students are often told that derivations of quantum mechanics must necessarily follow from the Hamiltonian or Lagrangian formulations of mechanics. Here, we first derive the existing Heisenberg equations of motion from Newton's laws and the uncertainty principle using only the equations $F=\frac{dP}{dt}$, $P=m\frac{dV}{dt}$, and $[X,P]=i$. Then, a new...
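For reference, the "quantum version of Newton's laws" alluded to here is the standard Heisenberg-picture statement; written for a Hamiltonian H = P²/2m + V(X) (which the abstract's own derivation deliberately avoids assuming), it reads:

```latex
\[
\frac{dX}{dt} = \frac{i}{\hbar}\,[H, X] = \frac{P}{m}, \qquad
\frac{dP}{dt} = \frac{i}{\hbar}\,[H, P] = -V'(X), \qquad
[X, P] = i\hbar ,
\]
```

so that with ħ = 1 the commutator matches the abstract's convention [X, P] = i.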
Adaptation of XMM-Newton SAS to GRID and VO architectures via web
Ibarra, A.; de La Calle, I.; Gabriel, C.; Salgado, J.; Osuna, P.
The XMM-Newton Scientific Analysis Software (SAS) is a robust software package that has allowed users to produce good scientific results since the beginning of the mission. This has been possible thanks to the capability of SAS to evolve with the advent of new technologies and adapt to the needs of the scientific community. The prototype of the Remote Interface for Science Analysis (RISA) presented here is one such example: it provides remote analysis of XMM-Newton data with access to all the existing SAS functionality, while making use of GRID computing technology. This technology has recently emerged within the astrophysical community to tackle the long-standing problem of computing power for the reduction of large amounts of data.
A multigrid Newton-Krylov method for flux-limited radiation diffusion
Rider, W.J.; Knoll, D.A.; Olson, G.L.
The authors focus on the integration of radiation diffusion including flux-limited diffusion coefficients. The nonlinear integration is accomplished with a Newton-Krylov method preconditioned with a multigrid Picard linearization of the governing equations. They investigate the efficiency of the linear and nonlinear iterative techniques
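As a hedged, minimal illustration of the Newton-Krylov idea (matrix-free Newton with a Krylov inner solver), the sketch below uses SciPy's newton_krylov on a small 1D steady nonlinear diffusion residual. The grid, boundary values, and the crude nonlinear coefficient are assumptions for demonstration, and no multigrid or Picard preconditioner is used, unlike the method the abstract describes.

```python
import numpy as np
from scipy.optimize import newton_krylov

# 1D steady diffusion -d/dx( D(u) du/dx ) = 1 on (0, 1), u(0) = u(1) = 0,
# with a simple solution-dependent coefficient D(u) = 1 / (1 + |u|).
n = 101
h = 1.0 / (n - 1)

def residual(u):
    r = np.zeros_like(u)
    d = 1.0 / (1.0 + np.abs(u))                 # nonlinear diffusion coefficient
    d_half = 0.5 * (d[:-1] + d[1:])             # face-centred coefficient values
    flux = d_half * (u[1:] - u[:-1]) / h        # D du/dx at cell faces
    r[1:-1] = -(flux[1:] - flux[:-1]) / h - 1.0 # interior residual
    r[0], r[-1] = u[0], u[-1]                   # Dirichlet boundary conditions
    return r

u0 = np.zeros(n)
u = newton_krylov(residual, u0, method="lgmres", f_tol=1e-10, verbose=False)
print("max u:", u.max())
```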
2016 Newton County, Georgia ADS100 4-Band 8 Bit Imagery
National Oceanic and Atmospheric Administration, Department of Commerce — This data set consists of 0.5-foot pixel resolution, natural color orthoimages covering Newton County, Georgia. An orthoimage is remotely sensed image data in which...
Newton's second law and the multiplication of distributions
Sarrico, C. O. R.; Paiva, A.
Newton's second law is applied to study the motion of a particle subjected to a time dependent impulsive force containing a Dirac delta distribution. Within this setting, we prove that this problem can be rigorously solved neither by limit processes nor by using the theory of distributions (limited to the classical Schwartz products). However, using a distributional multiplication, not defined by a limit process, a rigorous solution emerges.
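As a concrete special case of the impulsive force discussed here (standard textbook reasoning, not the distributional-product construction the paper develops), a Dirac-delta impulse of strength J simply produces a jump in velocity:

```latex
\[
m\frac{dv}{dt} = f(t) + J\,\delta(t - t_0)
\quad\Longrightarrow\quad
v(t_0^{+}) - v(t_0^{-}) = \frac{J}{m},
\]
```

with $v$ evolving under the regular force $f(t)$ away from $t_0$.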
Inexact proximal Newton methods for self-concordant functions
Li, Jinchao; Andersen, Martin Skovgaard; Vandenberghe, Lieven
… with an application to L1-regularized covariance selection, in which prior constraints on the sparsity pattern of the inverse covariance matrix are imposed. In the numerical experiments the proximal Newton steps are computed by an accelerated proximal gradient method, and multifrontal algorithms for positive definite … matrices with chordal sparsity patterns are used to evaluate gradients and matrix-vector products with the Hessian of the smooth component of the objective…
A Magnetic Set-Up to Help Teach Newton's Laws
Panijpan, Bhinyo; Sujarittham, Thanida; Arayathanitkul, Kwan; Tanamatayarat, Jintawat; Nopparatjamjomras, Suchai
A set-up comprising a magnetic disc, a solenoid and a mechanical balance was used to teach first-year physics students Newton's third law with the help of a free body diagram. The image of a floating magnet immobilized by the solenoid's repulsive force should help dispel a common misconception of students as regards the first law: that stationary…
Accelerating Inexact Newton Schemes for Large Systems of Nonlinear Equations
Fokkema, D.R.; Sleijpen, G.L.G.; Vorst, H.A. van der
Classical iteration methods for linear systems, such as Jacobi iteration, can be accelerated considerably by Krylov subspace methods like GMRES. In this paper, we describe how inexact Newton methods for nonlinear problems can be accelerated in a similar way and how this leads to a general
Newton Power Flow Methods for Unbalanced Three-Phase Distribution Networks
Sereeter, B.; Vuik, C.; Witteveen, C.
Two mismatch functions (power or current) and three coordinates (polar, Cartesian and complex form) result in six versions of the Newton-Raphson method for the solution of power flow problems. In this paper, five new versions of the Newton power flow method developed for single-phase problems in our …
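For context, one of the six formulations counted here is the familiar polar-coordinate, power-mismatch Newton-Raphson power flow. A minimal sketch for a toy two-bus system (one slack bus, one PQ bus) is given below; the line impedance and load are invented, and the Jacobian is approximated by finite differences for brevity rather than assembled analytically as in practice.

```python
import numpy as np

# Toy two-bus system: bus 1 is the slack (V = 1.0 p.u., angle 0), bus 2 is a
# PQ bus with an assumed load of 0.8 + j0.4 p.u.; line impedance 0.02 + j0.1 p.u.
y_line = 1.0 / (0.02 + 0.10j)
Y = np.array([[y_line, -y_line],
              [-y_line, y_line]])
p_load, q_load = 0.8, 0.4

def mismatch(x):
    theta2, v2 = x
    V = np.array([1.0 + 0.0j, v2 * np.exp(1j * theta2)])
    S = V * np.conj(Y @ V)              # complex power injected at each bus
    # Specified injection at bus 2 is the negative of the load.
    return np.array([S[1].real + p_load, S[1].imag + q_load])

x = np.array([0.0, 1.0])                # flat start
for _ in range(10):
    f = mismatch(x)
    if np.linalg.norm(f) < 1e-10:
        break
    # Finite-difference Jacobian of the power mismatches.
    J = np.zeros((2, 2))
    eps = 1e-6
    for k in range(2):
        xp = x.copy()
        xp[k] += eps
        J[:, k] = (mismatch(xp) - f) / eps
    x = x + np.linalg.solve(J, -f)

print(f"bus-2 angle = {np.degrees(x[0]):.3f} deg, |V2| = {x[1]:.4f} p.u.")
```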
Newtonian cosmology Newton would understand
Lemons, D.S.
Isaac Newton envisioned a static, infinite, and initially uniform, zero field universe that was gravitationally unstable to local condensations of matter. By postulating the existence of such a universe and using it as a boundary condition on Newtonian gravity, a new field equation for gravity is derived, which differs from the classical one by a time-dependent cosmological term proportional to the average mass density of the universe. The new field equation not only makes Jeans' analysis of the gravitational instability of a Newtonian universe consistent, but also gives rise to a family of Newtonian evolutionary cosmologies parametrized by a time-invariant expansion velocity. This Newtonian cosmology contrasts with both 19th-century ones and with post general relativity Newtonian cosmology
Is 27 really a dangerous age for famous musicians? Retrospective cohort study.
Wolkewitz, Martin; Allignol, Arthur; Graves, Nicholas; Barnett, Adrian G
To test the "27 club" hypothesis that famous musicians are at an increased risk of death at age 27. Design Cohort study using survival analysis with age as a time dependent exposure. Comparison was primarily made within musicians, and secondarily relative to the general UK population. The popular music scene from a UK perspective. Musicians (solo artists and band members) who had a number one album in the UK between 1956 and 2007 (n = 1046 musicians, with 71 deaths, 7%). Risk of death by age of musician, accounting for time dependent study entry and the number of musicians at risk. Risk was estimated using a flexible spline which would allow a bump at age 27 to appear. We identified three deaths at age 27 amongst 522 musicians at risk, giving a rate of 0.57 deaths per 100 musician years. Similar death rates were observed at ages 25 (rate = 0.56) and 32 (0.54). There was no peak in risk around age 27, but the risk of death for famous musicians throughout their 20s and 30s was two to three times higher than the general UK population. The 27 club is unlikely to be a real phenomenon. Fame may increase the risk of death among musicians, but this risk is not limited to age 27.
You err, Einstein.. Newton, Einstein, Heisenberg, and Feynman discuss quantum physics
Fritzsch, Harald
Harald Fritzsch and his star physicists Einstein, Heisenberg, and Feynman explain the central concept of modern physics, quantum mechanics, without which nothing works in today's world. And the great Isaac Newton poses the questions that everyone would ask.
Patient evaluation of the use of follitropin alfa in a prefilled ready-to-use injection pen in assisted reproductive technology: an observational study
Welcker J
Background: Self-administration of recombinant human follicle-stimulating hormone (r-hFSH) can be performed using injection pen devices by women undergoing assisted reproductive technology procedures. The objective of this study was to explore the use of the prefilled follitropin alfa pen in routine assisted reproductive technology procedures in Germany. Methods: This prospective, observational study was conducted across 43 German IVF centres over a period of 1.75 years. Patients who had used the prefilled follitropin alfa pen in the current or a previous cycle of controlled ovarian stimulation completed a questionnaire to assess their opinions of the device. Results: A total of 5328 patients were included in the study. Of these, 2888 reported that they had previous experience of daily FSH injections. Significantly more patients reported that less training was required to use the prefilled follitropin alfa pen than a syringe and lyophilized powder (1997/3081 [64.8%]). Conclusions: In this questionnaire-based survey, routine use of the prefilled follitropin alfa pen was well accepted and associated with favourable patient perceptions. Users of the pen found it easier to initially learn how to use, and subsequently use, than other injection methods. In general, the prefilled follitropin alfa pen was the preferred method for self-administration of gonadotrophins. Together with previous findings, the results here indicate a high level of patient satisfaction among users of the prefilled follitropin alfa pen for daily self-administration of r-hFSH.
Parallel Newton-Krylov-Schwarz algorithms for the transonic full potential equation
Cai, Xiao-Chuan; Gropp, William D.; Keyes, David E.; Melvin, Robin G.; Young, David P.
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The overall algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, is robust and, economical for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report their effect on numerical convergence rate, overall execution time, and parallel efficiency on a distributed-memory parallel computer.
Newton-Krylov-Schwarz algorithms for the 2D full potential equation
Cai, Xiao-Chuan [Univ. of Colorado, Boulder, CO (United States)]; Gropp, W.D. [Argonne National Lab., IL (United States)]; Keyes, D.E. [Old Dominion Univ., Norfolk, VA (United States)] [and others]
We study parallel two-level overlapping Schwarz algorithms for solving nonlinear finite element problems, in particular, for the full potential equation of aerodynamics discretized in two dimensions with bilinear elements. The main algorithm, Newton-Krylov-Schwarz (NKS), employs an inexact finite-difference Newton method and a Krylov space iterative method, with a two-level overlapping Schwarz method as a preconditioner. We demonstrate that NKS, combined with a density upwinding continuation strategy for problems with weak shocks, can be made robust for this class of mixed elliptic-hyperbolic nonlinear partial differential equations, with proper specification of several parameters. We study upwinding parameters, inner convergence tolerance, coarse grid density, subdomain overlap, and the level of fill-in in the incomplete factorization, and report favorable choices for numerical convergence rate and overall execution time on a distributed-memory parallel computer.
Review: Factors affecting fouling in conventional pens for slaughter pigs
Larsen, Mona Lilian Vestbjerg; Bertelsen, Maja; Pedersen, Lene Juul
and pigs' earlier experience. Further, these primary factors are affected by secondary factors such as the shape of the pen, the weight of the pigs and especially the heat balance of the pigs, which is affected by several tertiary factors including, for example, temperature, humidity and draught. Results...
Dynamics of a single ion in a perturbed Penning trap: Octupolar perturbation
Lara, Martin; Salas, J. Pablo
Imperfections in the design or implementation of Penning traps may give rise to electrostatic perturbations that introduce nonlinearities in the dynamics. In this paper we investigate, from the point of view of classical mechanics, the dynamics of a single ion trapped in a Penning trap perturbed by an octupolar perturbation. Because of the axial symmetry of the problem, the system has two degrees of freedom. Hence, this model is ideal to be managed by numerical techniques like continuation of families of periodic orbits and Poincare surfaces of section. We find that, through the variation of the two parameters controlling the dynamics, several periodic orbits emanate from two fundamental periodic orbits. This process produces important changes (bifurcations) in the phase space structure leading to chaotic behavior
Numerical evaluation of general n-dimensional integrals by the repeated use of Newton-Cotes formulas
Nihira, Takeshi; Iwata, Tadao.
The composite Simpson's rule is extended to n-dimensional integrals with variable limits. This extension is illustrated by means of the recursion relation of n-fold series. The structure of calculation by the Newton-Cotes formulas for n-dimensional integrals is clarified with this method. A quadrature formula corresponding to the Newton-Cotes formulas can be readily constructed. The results computed for some examples are given, and the error estimates for two- or three-dimensional integrals are described using the error term. (author)
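The recursive construction of multi-dimensional Newton-Cotes rules mentioned above can be sketched in a few lines; the following toy Python version (constant limits, fixed number of panels) is only meant to illustrate the idea and is not the authors' formulation.

```python
# Illustrative sketch: composite Simpson's rule applied recursively to an
# n-dimensional integral over a box (constant limits, 2*m panels per axis).
import numpy as np

def simpson_1d(f, a, b, m=2):
    """Composite Simpson's rule with 2*m panels on [a, b]."""
    x = np.linspace(a, b, 2 * m + 1)
    w = np.ones(2 * m + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    h = (b - a) / (2 * m)
    return h / 3.0 * sum(wi * f(xi) for wi, xi in zip(w, x))

def simpson_nd(f, limits, m=2):
    """Integrate f(x1, ..., xn) over the box given by `limits` recursively."""
    (a, b), rest = limits[0], limits[1:]
    if not rest:
        return simpson_1d(f, a, b, m)
    # The outer rule integrates over the first variable; each evaluation of the
    # integrand is itself an (n-1)-dimensional Simpson integration.
    return simpson_1d(lambda x1: simpson_nd(lambda *xs: f(x1, *xs), rest, m), a, b, m)

# Example: the integral of x*y*z over the unit cube is 1/8.
print(simpson_nd(lambda x, y, z: x * y * z, [(0.0, 1.0)] * 3))
```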
Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.
Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika
Computer work is associated with musculoskeletal disorders (MSDs). Several methods have been developed to assess computer work risk factors related to MSDs. This review aims to give an overview of current techniques available for pen-and-paper-based observational methods in assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods were focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors, reliability and validity of pen-and-paper observational methods associated with computer work. Two evaluators independently carried out this review. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors covered by current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven to be reliable and were rated as moderate to good. For validity testing, only four of the seven methods were tested and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors, including working posture, office components, force, repetition and office environment, at office workstations and computer work. Although the most important factor in developing a tool is proper validation of exposure assessment techniques, not all existing observational methods have been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.
Implementing WebQuest Based Instruction on Newton's Second Law
Gokalp, Muhammed Sait; Sharma, Manjula; Johnston, Ian; Sharma, Mia
The purpose of this study was to investigate how WebQuests can be used in physics classes for teaching specific concepts. The study had three stages. The first stage was to develop a WebQuest on Newton's second law. The second stage involved developing a lesson plan to implement the WebQuest in class. In the final stage, the WebQuest was…
Tracking Color Shift in Ballpoint Pen Ink Using Photoshop Assisted Spectroscopy: A Nondestructive Technique Developed to Rehouse a Nobel Laureate's Manuscript
Many historically and culturally significant documents from the mid-to-late twentieth century were written in ballpoint pen inks, which contain light-sensitive dyes that present problems for collection custodians and paper conservators. The conservation staff at the National Library of Medicine (NLM), National Institutes of Health, conducted a multiphase project on the chemistry and aging of ballpoint pen ink that culminated in the development of a new method to detect aging of ballpoint pen ...
Nest building and posture changes and activity budget of gilts housed in pens and crates
Andersen, Inger Lise; Vasdal, Guro; Pedersen, Lene Juul
The aim of the present work was to study nest building, posture changes and the overall activity budget of gilts in pens vs. crates. Twenty-three HB gilts (high piglet survival day 5) and 21 LB gilts (low piglet survival day 5) were video recorded from day 110 in pregnancy to four days after farrowing in either a farrowing pen or a farrowing crate. The gilts were provided with 2 kg of chopped straw daily from day 113 of pregnancy until farrowing in both environments. Nest building and other activity measures of the sows were analysed using continuous sampling from the last 12 h before the first piglet was born until 8 h after the birth of the first piglet. There was no significant effect of the sows' breeding value on any of the sow behaviours. Sows housed in pens spent significantly more time nest building than crated sows from 4 to 12 h prepartum (P ...).
Xenon-based Penning mixtures for proportional counters
Ramsey, B.D.; Agrawal, P.C.; National Aeronautics and Space Administration, Huntsville, AL
The choice of quench gas can have a significant effect on the gas gain and energy resolution of gas-filled proportional counters. Details are given on the performance obtained with a variety of quench additives of varying ionization potentials for use in xenon-filled systems. It is confirmed that optimum performance is obtained when the ionization potential is closely matched to the first metastable level of xenon (8.3 eV), as is the case with xenon + trimethylamine and xenon + dimethylamine. For these mixtures the Penning effect is at its strongest. (orig.)
Mechanism of force mode dip-pen nanolithography
Yang, Haijun, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Key Laboratory for Thin Film and Microfabrication of the Ministry of Education, Research Institute of Micro/Nano Science and Technology, Shanghai Jiao Tong University, Shanghai 200240 (China); Interfacial Water Division and Key Laboratory of Interfacial Physics and Technology, Shanghai Institute of Applied Physics, CAS, Shanghai 201800 (China); Xie, Hui; Rong, Weibin; Sun, Lining [State Key Laboratory of Robotics and Systems, Harbin Institute of Technology, Harbin 150080 (China); Wu, Haixia; Guo, Shouwu, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Key Laboratory for Thin Film and Microfabrication of the Ministry of Education, Research Institute of Micro/Nano Science and Technology, Shanghai Jiao Tong University, Shanghai 200240 (China); Wang, Huabin, E-mail: [email protected], E-mail: [email protected], E-mail: [email protected] [Centre for Tetrahertz Research, Chongqing Institute of Green and Intelligent Technology, Chinese Academy of Sciences, Chongqing 400714 (China)
In this work, the underlying mechanism of force mode dip-pen nanolithography (FMDPN) is investigated in depth by analyzing force curves, tapping mode deflection signals, and "Z-scan" voltage variations during the FMDPN. The operation parameters including the relative "trigger threshold" and "surface delay" parameters are vital to control the loading force and dwell time for ink deposition during FMDPN. A model is also developed to simulate the interactions between the atomic force microscope tip and soft substrate during FMDPN, and verified by its good performance in fitting our experimental data.
Newton-like methods for Navier-Stokes solution
Qin, N.; Xu, X.; Richards, B. E.
The paper reports on Newton-like methods called SFDN-alpha-GMRES and SQN-alpha-GMRES methods that have been devised and proven as powerful schemes for large nonlinear problems typical of viscous compressible Navier-Stokes solutions. They can be applied using a partially converged solution from a conventional explicit or approximate implicit method. Developments have included the efficient parallelization of the schemes on a distributed memory parallel computer. The methods are illustrated using a RISC workstation and a transputer parallel system respectively to solve a hypersonic vortical flow.
Constraining the neutron star equation of state using XMM-Newton
Kaastra, J.; Mendez, M.; In 't Zand, J. J. M.; Jonker, P.G.
We have identified three possible ways in which future XMM-Newton observations can provide significant constraints on the equation of state of neutron stars. First, using a long observation of the neutron star X-ray transient Cen X-4 in quiescence one can use the RGS spectrum to constrain the
Jonker, P.G.; Kaastra, J.S.; Méndez, M.; in 't Zand, J.J.M.
Design of reciprocal unit based on the Newton-Raphson approximation
Gundersen, Anders Torp; Winther-Almstrup, Rasmus; Boesen, Michael
A design of a reciprocal unit based on Newton-Raphson approximation is described and implemented. We present two different designs for single precisions where one of them is extremely fast but the trade-off is an increase in area. The solution behind the fast design is that the design is fully...
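The Newton-Raphson reciprocal iteration that such a unit is built around is easy to state: to approximate 1/d one iterates x_{k+1} = x_k (2 − d x_k), which roughly doubles the number of correct bits per step. The Python sketch below is only an illustration of the iteration; the initial linear seed and the normalisation of the input are common textbook choices, not details taken from the cited design.

```python
# Illustrative sketch of the Newton-Raphson reciprocal iteration.
def reciprocal(d, iterations=5):
    """Approximate 1/d for d normalised to [0.5, 1), as a hardware unit would."""
    assert 0.5 <= d < 1.0
    x = 2.9142 - 2.0 * d              # simple linear seed (illustrative choice)
    for _ in range(iterations):
        x = x * (2.0 - d * x)         # quadratic convergence: the error is squared
    return x

print(reciprocal(0.75), 1.0 / 0.75)   # both ~1.3333333333333333
```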
Waveform control for magnetic testers using a quasi-Newton method
Yamamoto, Ken-ichi; Hanba, Shigeru
A nonlinear iterative learning algorithm is proposed to make a voltage waveform in the secondary coil sinusoidal in this paper. The algorithm employs a globally convergent Jacobian-free quasi-Newton type solver that has a BFGS-like structure. This method functions well, and it is demonstrated using typical soft magnetic materials
Newton-sor iterative method for solving the two-dimensional porous ...
In this paper, we consider the application of the Newton-SOR iterative method in obtaining the approximate solution of the two-dimensional porous medium equation (2D PME). The nonlinear finite difference approximation equation to the 2D PME is derived by using the implicit finite difference scheme. The developed ...
A Framework for Generalising the Newton Method and Other Iterative Methods from Euclidean Space to Manifolds
Manton, Jonathan H.
The Newton iteration is a popular method for minimising a cost function on Euclidean space. Various generalisations to cost functions defined on manifolds appear in the literature. In each case, the convergence rate of the generalised Newton iteration needed establishing from first principles. The present paper presents a framework for generalising iterative methods from Euclidean space to manifolds that ensures local convergence rates are preserved. It applies to any (memoryless) iterative m...
[Effects of large bio-manipulation fish pen on community structure of crustacean zooplankton in Meiliang Bay of Taihu Lake].
Ke, Zhi-Xin; Xie, Ping; Guo, Long-Gen; Xu, Jun; Zhou, Qiong
In 2005, a large bio-manipulation pen with the stock of silver carp and bighead carp was built to control the cyanobacterial bloom in Meiliang Bay of Taihu Lake. This paper investigated the seasonal variation of the community structure of crustacean zooplankton and the water quality within and outside the pen. There were no significant differences in the environmental parameters and phytoplankton biomass within and outside the pen. The species composition and seasonal dynamics of crustacean zooplankton within and outside the pen were similar, but the biomass of crustacean zooplankton was greatly suppressed by silver carp and bighead carp. The total crustacean zooplankton biomass and cladocerans biomass were significantly lower in the pen (P < 0.05). In general, silver carp and bighead carp exerted more pressure on cladoceran species than on copepod species. A distinct seasonal succession of crustacean zooplankton was observed in the Bay. Many crustacean species were only dominated in given seasons. Large-sized crustacean (mainly Daphnia sp. and Cyclops vicnus) dominated in winter and spring, while small-sized species (mainly Bosmina sp., Ceriodaphnia cornuta, and Limnoithona sinensis) dominated in summer and autumn. Canonical correspondence analysis showed that water transparency, temperature, and phytoplankton biomass were the most important factors affecting the seasonal succession of the crustacean.
Micro Penning Trap for Continuous Magnetic Field Monitoring in High Radiation Environments
Latorre, Javiera; Bollen, Georg; Gulyuz, Kerim; Ringle, Ryan; Bado, Philippe; Dugan, Mark; Lebit Team; Translume Collaboration
As new facilities for rare isotope beams, like FRIB at MSU, are constructed, there is a need for new instrumentation to monitor magnetic fields in beam magnets that can withstand the higher radiation level. Currently NMR probes, the instruments used extensively to monitor magnetic fields, do not have long lifespans in high-radiation environments. Therefore, a radiation-hard replacement is needed. We propose to use Penning trap mass spectrometry techniques to make high-precision magnetic field measurements. Our Penning microtrap will be radiation resistant, as all of the vital electronics will be at a safe distance from the radiation. The trap itself is made from materials not subject to radiation damage. Penning trap mass spectrometers can determine the magnetic field by measuring the cyclotron frequency of an ion with a known mass and charge. This principle is used on the Low Energy Beam Ion Trap (LEBIT) minitrap at NSCL, which is the foundation for the microtrap. We have partnered with Translume, who specialize in glass micro-fabrication, to develop a microtrap in fused-silica glass. A microtrap is finished and ready for testing at NSCL, with all of the electronic and hardware components set up. DOE Phase II SBIR Award No. DE-SC0011313, NSF Award Number 1062410 REU in Physics, NSF under Grant No. PHY-1102511.
Searching for propeller-phase ULXs in the XMM-Newton Serendipitous Source Catalogue
Earnshaw, H. P.; Roberts, T. P.; Sathyaprakash, R.
We search for transient sources in a sample of ultraluminous X-ray sources (ULXs) from the 3XMM-DR4 release of the XMM-Newton Serendipitous Source Catalogue in order to find candidate neutron star ULXs alternating between an accreting state and the propeller regime, in which the luminosity drops dramatically. By examining their fluxes and flux upper limits, we identify five ULXs that demonstrate long-term variability of over an order of magnitude. Using Chandra and Swift data to further characterize their light curves, we find that two of these sources are detected only once and could be X-ray binaries in outburst that only briefly reach ULX luminosities. Two others are consistent with being super-Eddington accreting sources with high levels of inter-observation variability. One source, M51 ULX-4, demonstrates apparent bimodal flux behaviour that could indicate the propeller regime. It has a hard X-ray spectrum, but no significant pulsations in its timing data, although with an upper limit of 10 per cent of the signal pulsed at ˜1.5 Hz a pulsating ULX cannot be excluded, particularly if the pulsations are transient. By simulating XMM-Newton observations of a population of pulsating ULXs, we predict that there could be approximately 200 other bimodal ULXs that have not been observed sufficiently well by XMM-Newton to be identified as transient.
An approximate block Newton method for coupled iterations of nonlinear solvers: Theory and conjugate heat transfer applications
Yeckel, Andrew; Lun, Lisa; Derby, Jeffrey J.
A new, approximate block Newton (ABN) method is derived and tested for the coupled solution of nonlinear models, each of which is treated as a modular, black box. Such an approach is motivated by a desire to maintain software flexibility without sacrificing solution efficiency or robustness. Though block Newton methods of similar type have been proposed and studied, we present a unique derivation and use it to sort out some of the more confusing points in the literature. In particular, we show that our ABN method behaves like a Newton iteration preconditioned by an inexact Newton solver derived from subproblem Jacobians. The method is demonstrated on several conjugate heat transfer problems modeled after melt crystal growth processes. These problems are represented by partitioned spatial regions, each modeled by independent heat transfer codes and linked by temperature and flux matching conditions at the boundaries common to the partitions. Whereas a typical block Gauss-Seidel iteration fails about half the time for the model problem, quadratic convergence is achieved by the ABN method under all conditions studied here. Additional performance advantages over existing methods are demonstrated and discussed.
A new era for French far right politics?: Comparing the FN under two Le Pens
Michelle Hale Williams
With 2012 elections looming on the horizon in France, much political attention has focused on the new leader of the National Front, Marine Le Pen. She is polling quite well, outpacing many of her mainstream party candidate rivals for the 2012 French presidency, and the public appears to have embraced her with open arms. Hailed as a promising new face of French politics, a wide swath of the French electorate indicates confidence in her ability to bring needed changes to France. Yet does she really represent a dramatic departure from former FN policies and positions? This article examines the model of the FN during the leadership of Jean-Marie Le Pen in comparison with that seen in the first eight months of Marine Le Pen's leadership in order to address this question.
Supporting the learning of Newton's laws with graphical data
Piggott, David
Teaching physics provides the opportunity for a unique interaction between students and instructor that is not found in chemistry or biology. Physics has a heavy emphasis on trying to alter students' misconceptions about how things work in the real world. In chemistry and microbiology this is not an issue because the topics of discussion in those classes are a new experience for the students. In the case of physics the students have everyday experience with the different concepts discussed. This causes the students to build incorrect mental models explaining how different things work. In order to correct these mental models physics teachers must first get the students to vocalize these misconceptions. Then the teacher must confront the students with an example that exposes the false nature of their model. Finally, the teacher must help the student resolve these discrepancies and form the correct model. This study attempts to resolve these discrepancies by giving the students concrete evidence via graphs of Newton's laws. The results reported here indicate that this method of eliciting the misconception, confronting the misconception, and resolving the misconception is successful with Newton's third law, but only marginally successful for the first and second laws.
Gravitation: Field theory par excellence Newton, Einstein, and beyond
Yilmaz, H.
Newtonian gravity satisfies the two principles of equivalence m_i = m_p (the passive principle) and m_a = m_p (the active principle). A relativistic gauge field concept in D = s+1 dimensional curved space will, in general, violate these two principles, as in m_p = α m_i, m_a = λ m_p, where α = D:3 and λ measures the presence of the field stress-energy t^ν_μ in the field equations. It is shown that α = 1, λ = 0 corresponds to general relativity and α = 1, λ = 1 to the theory of the author. It is noted that the correspondence limit of general relativity is not Newton's theory but a theory suggested by Robert Hooke a few years before Newton published his in the Principia. The gauge is independent of the two principles but has to do with local special relativistic correspondence and compatibility with quantum mechanics. It is shown that unless α = 1, λ = 1 the generalized theory cannot predict correctly many observable effects, including the 532'' per century Newtonian part in Mercury's perihelion advance
XMM-Newton detects X-ray 'solar cycle' in distant star
[Image captions: The Sun as observed by the ESA/NASA SOHO observatory near the minimum of the solar cycle (left) and near its maximum (right); the signs of solar activity near the maximum are clearly seen. The huge flare produced on 4 November 2003: this SOHO image shows the powerful X-ray flare whose associated coronal mass ejection, travelling at 8.2 million kilometres per hour, hit the Earth several hours later and caused disruptions to telecommunication and power distribution lines. New XMM-Newton observations suggest that this behaviour may be typical of stars like the Sun, such as HD 81809 in the constellation Hydra.] Since the time Galileo discovered sunspots, in 1610, astronomers have measured their number, size and location on the disc of the Sun. Sunspots are relatively cooler areas on the Sun that are observed as dark patches. Their number rises and falls with the level of activity of the Sun in a cycle of about 11 years. When the Sun is very active, large-scale phenomena take place, such as the flares and coronal mass ejections observed by the ESA/NASA solar observatory SOHO. These events release a large amount of energy and charged particles that hit the Earth and can cause powerful magnetic storms, affecting radio communications, power distribution lines and even our weather and climate. During the solar cycle, the X-ray emission from the Sun varies by a large amount (about a factor of 100) and is strongest when the cycle is at its peak and the surface of the Sun is covered by the largest number of spots. ESA's X-ray observatory, XMM-Newton, has now shown for the first time that this cyclic X-ray behaviour is common to
Scalable Newton-Krylov solver for very large power flow problems
Idema, R.; Lahaye, D.J.P.; Vuik, C.; Van der Sluis, L.
The power flow problem is generally solved by the Newton-Raphson method with a sparse direct solver for the linear system of equations in each iteration. While this works fine for small power flow problems, we will show that for very large problems the direct solver is very slow and we present
Particle-in-cell simulations of anomalous transport in a Penning discharge
Carlsson, Johan; Kaganovich, Igor; Powis, Andrew; Raitses, Yevgeny; Romadanov, Ivan; Smolyakov, Andrei
Electrostatic particle-in-cell simulations of a Penning discharge are performed in order to investigate azimuthally asymmetric, spoke-like structures previously observed in experiments. Two-dimensional simulations show that for Penning-discharge conditions, a persistent nonlinear spoke-like structure forms readily and rotates in the direction of E × B and electron diamagnetic drifts. The azimuthal velocity is within about a factor of 2 of the ion acoustic speed. The spoke frequency follows the experimentally observed scaling with ion mass, which indicates the importance of ion inertia in spoke formation. The spoke provides enhanced (anomalous) radial electron transport, and the effective cross-field conductivity is several times larger than the classical (collisional) value. The level of anomalous current obtained in the simulations is in good agreement with the experimental data. The rotating spoke channels most of the radial current, observable by an edge probe as short pulses.
Penning traps with unitary architecture for storage of highly charged ions.
Tan, Joseph N; Brewer, Samuel M; Guise, Nicholas D
Penning traps are made extremely compact by embedding rare-earth permanent magnets in the electrode structure. Axially-oriented NdFeB magnets are used in unitary architectures that couple the electric and magnetic components into an integrated structure. We have constructed a two-magnet Penning trap with radial access to enable the use of laser or atomic beams, as well as the collection of light. An experimental apparatus equipped with ion optics is installed at the NIST electron beam ion trap (EBIT) facility, constrained to fit within 1 meter at the end of a horizontal beamline for transporting highly charged ions. Highly charged ions of neon and argon, extracted with initial energies up to 4000 eV per unit charge, are captured and stored to study the confinement properties of a one-magnet trap and a two-magnet trap. Design considerations and some test results are discussed.
Penning traps with unitary architecture for storage of highly charged ions
Tan, Joseph N.; Guise, Nicholas D.; Brewer, Samuel M.
Characteristics of Handwriting of People With Cerebellar Ataxia: Three-Dimensional Movement Analysis of the Pen Tip, Finger, and Wrist.
Fujisawa, Yuhki; Okajima, Yasutomo
There are several functional tests for evaluating manual performance; however, quantitative manual tests for ataxia, especially those for evaluating handwriting, are limited. This study aimed to investigate the characteristics of cerebellar ataxia by analyzing handwriting, with a special emphasis on correlation between the movement of the pen tip and the movement of the finger or wrist. This was an observational study. Eleven people who were right-handed and had cerebellar ataxia and 17 people to serve as controls were recruited. The Scale for the Assessment and Rating of Ataxia was used to grade the severity of ataxia. Handwriting movements of both hands were analyzed. The time required for writing a character, the variability of individual handwriting, and the correlation between the movement of the pen tip and the movement of the finger or wrist were evaluated for participants with ataxia and control participants. The writing time was longer and the velocity profile and shape of the track of movement of the pen tip were more variable in participants with ataxia than in control participants. For participants with ataxia, the direction of movement of the pen tip deviated more from that of the finger or wrist, and the shape of the track of movement of the pen tip differed more from that of the finger or wrist. The severity of upper extremity ataxia measured with the Scale for the Assessment and Rating of Ataxia was mostly correlated with the variability parameters. Furthermore, it was correlated with the directional deviation of the trajectory of movement of the pen tip from that of the finger and with increased dissimilarity of the shapes of the tracks. The results may have been influenced by the scale and parameters used to measure movement. Ataxic handwriting with increased movement noise is characterized by irregular pen tip movements unconstrained by the finger or wrist. The severity of ataxia is correlated with these unconstrained movements. © 2015 American
Novel methods for improvement of a Penning ion source for neutron generator applications.
Sy, A; Ji, Q; Persaud, A; Waldmann, O; Schenkel, T
Penning ion source performance for neutron generator applications is characterized by the atomic ion fraction and beam current density, providing two paths by which source performance can be improved for increased neutron yields. We have fabricated a Penning ion source to investigate novel methods for improving source performance, including optimization of wall materials and electrode geometry, advanced magnetic confinement, and integration of field emitter arrays for electron injection. Effects of several electrode geometries on discharge characteristics and extracted ion current were studied. Additional magnetic confinement resulted in a factor of two increase in beam current density. First results indicate unchanged proton fraction and increased beam current density due to electron injection from carbon nanofiber arrays.
Discrimination of Black Ball-point Pen Inks by High Performance Liquid Chromatography (HPLC)
Mohamed Izzharif Abdul Halim; Norashikin Saim; Rozita Osman; Halila Jasmani; Nurul Nadhirah Zainal Abidin
In this study, thirteen types of black ball-point pen inks of three major brands were analyzed using high performance liquid chromatography (HPLC). Separation of the ink components was achieved using a Bondapak C-18 column with gradient elution using water, ethanol and ethyl acetate. The chromatographic data obtained at a wavelength of 254.8 nm were analyzed using agglomerative hierarchical clustering (AHC) and principal component analysis (PCA). AHC was able to group the inks into three clusters. This result was supported by PCA, whereby distinct separation of the three different brands was achieved. Therefore, HPLC in combination with chemometric methods may be a valuable tool for the analysis of black ball-point pen inks for forensic purposes. (author)
Application of micro-attenuated total reflectance Fourier transform infrared spectroscopy to ink examination in signatures written with ballpoint pen on questioned documents.
Nam, Yun Sik; Park, Jin Sook; Lee, Yeonhee; Lee, Kang-Bong
Questioned documents examined in a forensic laboratory sometimes contain signatures written with ballpoint pen inks; these signatures were examined to assess the feasibility of micro-attenuated total reflectance (ATR) Fourier transform infrared (FTIR) spectroscopy as a forensic tool. Micro-ATR FTIR spectra for signatures written with 63 ballpoint pens available commercially in Korea were obtained and used to construct an FTIR spectral database. A library-searching program was utilized to identify the manufacturer, blend, and model of each black ballpoint pen ink based upon their FTIR peak intensities, positions, and patterns in the spectral database. This FTIR technique was also successfully used in determining the sequence of homogeneous line intersections from the crossing lines of two ballpoint pen signatures. We have demonstrated with a set of sample documents that micro-ATR FTIR is a viable nondestructive analytical method that can be used to identify the origin of the ballpoint pen ink used to mark signatures. © 2014 American Academy of Forensic Sciences.
Isaac Newton et la gravitation universelle : un scientifique au tempérament rageur
Mettra, Pierre
Discover at last everything you need to know about Newton and the theory of universal gravitation in less than an hour! An essential figure in the history of science, Isaac Newton shook the world with his theory of universal gravitation. Secretly passionate about alchemy, he brought about incredible progress in optics and mathematical analysis, becoming in the eyes of his contemporaries one of the most innovative and respected scientists in the world, a judgment that posterity has not contradicted. This book will tell you more about: the life of New
The early behaviour of cow and calf in an individual calving pen
Jensen, Margit Bak
The aim was to investigate the early behaviour of dairy cows and their calves. Thirty-eight multiparous Danish Holstein Friesian cows and their calves were housed in individual calving pens during the first twelve days post-partum and their behaviour was observed during 24 h on days 3, 7 and 11. Cows gradually reduced the time spent sniffing and licking their calves from 59 to 49 min over the days studied (P ...); [...] cow from less than half a minute on days 3 and 7 to 1 min on day 11 (P ...); [...] studied (P ...). To study the cows' behavioural priorities, the cows were tested on either day 4, 8 or 12 after calving by removing them from their pens for 3 h and subsequently reintroducing them. Behavioural observations during 3 h after reintroduction showed...
Particle-In-Cell simulations of the Ball-pen probe
Komm, M.; Adámek, Jiří; Pekárek, Z.; Pánek, Radomír
Roč. 50, č. 9 (2010), s. 814-818 ISSN 0863-1042 R&D Projects: GA AV ČR KJB100430901 Institutional research plan: CEZ:AV0Z20430508 Keywords: Ball-pen * tokamak * plasma * plasma potential * PIC * simulation * I-V characteristics Subject RIV: BL - Plasma and Gas Discharge Physics Impact factor: 1.006, year: 2010 http://onlinelibrary.wiley.com/doi/10.1002/ctpp.201010137/pdf
Non-relativistic spinning particle in a Newton-Cartan background
Barducci, Andrea; Casalbuoni, Roberto; Gomis, Joaquim
We construct the action of a non-relativistic spinning particle moving in a general torsionless Newton-Cartan background. The particle does not follow the geodesic equations, instead the motion is governed by the non-relativistic analog of Papapetrou equation. The spinning particle is described in terms of Grassmann variables. In the flat case the action is invariant under the non-relativistic analog of space-time vector supersymmetry.
Orobanche flava Mart. ex F.W. Schultz (Orobanchaceae: en la Península Ibérica
Pujadas Salvá, Antonio J.
Orobanche flava is reported in the N of the Iberian Peninsula. Its diversity and distribution in the peninsula are analyzed: var. flava in the Pyrenees and var. albicans Rhiner in the Cantabrian Mountain chain. Lectotypification of O. flava var. albicans Rhiner [ZT] is proposed. To facilitate the identification of O. flava we emphasize its differential morphological characters and an original illustration is provided.
Carcass traits and meat quality of growing rabbits in pens with and without different multilevel platforms
M. Martino
The aim of this trial was to determine the effect of the presence of wire or plastic mesh elevated platforms on carcass traits and meat quality characteristics, with particular attention to the oxidative status of growing rabbits. A total of 174 five-week-old rabbits were randomly divided into 3 groups with 2 replications (6 pens; 29 rabbits/pen): pens without platforms (NoP) with a stocking density of 16 rabbits/m2, and pens with wire-mesh platforms (WP) or plastic-mesh platforms (PP) that were placed on 2 levels, with a stocking density of 16 rabbits/m2 on the floor or 9.14 rabbits/m2 when the platforms were included. At 84 d rabbits were slaughtered. The slaughter traits and Longissimus lumborum (LL) physical and chemical composition were not affected by treatments. Rabbits from the PP group showed the highest retinol and γ-tocotrienol content in the LL muscle, whereas the NoP ones showed a higher α-tocotrienol and α-tocopherol level. The absence of platforms led to decreased (P<0.001) thiobarbituric acid-reactive substances values and induced an improvement in n-3 polyunsaturated fatty acids. Levels of linoleic, linolenic and docosahexaenoic acids were equal to those of the WP group (23.45, 3.75, 0.64% in NoP and 22.6, 4.14, 0.53% in WP, respectively) but higher than in PP rabbits (20.86, 3.05, 0.45%, respectively). It can be concluded that the pens with elevated platforms provide greater possibilities for movement, which is beneficial from the viewpoint of animal welfare. However, this greater activity influences the oxidative status of the meat, decreasing the antioxidant content and worsening the lipid oxidation of rabbit meat.
ASCA and XMM-Newton observations of the galactic supernova remnant G311.5−0.3
Pannuti T.G.
We present an analysis of X-ray observations made with ASCA and XMM-Newton of the Galactic supernova remnant (SNR) G311.5−0.3. Prior infrared and radio observations of this SNR have revealed a shell-like morphology at both wavelengths. The spectral index of the radio emission is consistent with synchrotron emission, while the infrared colors are consistent with emission from shocked molecular hydrogen. Also, previous CO observations have indicated an interaction between G311.5−0.3 and an adjacent molecular cloud. Our previous analysis of the pointed ASCA observation made of this SNR detected X-ray emission from the source for the first time but lacked the sensitivity and the angular resolution to rigorously investigate its X-ray properties. We have analyzed an archival XMM-Newton observation that included G311.5−0.3 in the field of view: this is the first time that XMM-Newton data have been used to probe the X-ray properties of this SNR. The XMM-Newton observation confirms that the X-ray emission from G311.5−0.3 is centrally concentrated and supports the classification of this source as a mixed-morphology SNR. In addition, our joint fitting of extracted ASCA and XMM-Newton spectra favors a thermal origin for the X-ray emission over a non-thermal origin. The spectral fitting parameters for our TBABS×APEC fit to the extracted spectra are N_H = 4.63 (+1.87/−0.85) × 10^22 cm^−2 and kT = 0.68 (+0.20/−0.24) keV. From these fit parameters, we derive the following values for physical parameters of the SNR: n_e = 0.20 cm^−3, n_p = 0.17 cm^−3, M_X = 21.4 M_⊙ and P/k = 3.18 × 10^6 K cm^−3.
The influence of fatigue and chronic low back pain on muscle recruitment patterns following an unexpected external perturbation
Júlia Jubany1,2,
Lieven Danneels3 &
Rosa Angulo-Barroso1,4
BMC Musculoskeletal Disorders volume 18, Article number: 161 (2017) Cite this article
Chronic low back pain (CLBP) has been associated with altered trunk muscle responses as well as increased muscle fatigability. CLBP patients and fatigued healthy subjects could experience similar neuromuscular strategies to attempt to protect the spine. The current study examined muscle activation differences between healthy and CLBP subjects following a perturbation. In addition, the possible role of muscle fatigue was evaluated by investigating the healthy control subjects in a non-fatigued and a fatigued condition. Both experiments were combined to evaluate possible similar strategies between CLBP and fatigued samples.
Cross-sectional study in which 24 CLBP subjects and 26 healthy subjects were evaluated. Both groups (CLBP vs. healthy) and both conditions (non-fatigued and fatigued) were evaluated while a weight was suddenly dropped on a held tray. Erector spinae, multifidus, obliques and biceps brachii were recorded using surface electromyography. Variables describing burst timing and variables describing the amount of muscle activity (number of bursts and amplitude increase) post impact were studied. The analysis between groups and conditions was carried out using ANOVAs with repeated measurements for the muscle factor.
CLBP subjects reacted similarly to healthy subjects regarding the amount of muscle activity post impact. However, the CLBP group showed temporal characteristics of muscle activity that lay between those of the fatigued and non-fatigued healthy groups. Clear differences in muscle activity were displayed by healthy subjects across conditions: fatigued healthy subjects showed less activity after impact (upper limb and trunk muscles) than non-fatigued healthy subjects, and temporal characteristics that resembled those of CLBP patients. The temporal characteristic shared by CLBP and fatigued healthy people was a delayed first burst of muscle activity after impact.
Although CLBP and healthy people showed similar muscle patterns, the temporal characteristics of muscle activity in CLBP subjects lay between those of non-fatigued and fatigued healthy people. While the altered temporal muscle pattern in CLBP subjects could reflect maladaptive strategies, the temporal and muscle activity characteristics shown by fatigued healthy people may predispose to back injuries.
Chronic low back pain (CLBP) is a multifactorial syndrome that represents a major problem throughout the world [1]. In recent years, trunk neuromuscular deficiency has been associated with low back pain: delays in activation (larger muscle activation latencies) [2, 3] and higher levels of muscle activation and trunk muscle co-contraction [3, 4]. These deficits have been interpreted as attempts to protect against further pain, injury, or both, but also as a possible source of further problems in the long term, and have been suggested to contribute to CLBP [5]. For example, greater co-contraction strategies have been associated with the attempt to protect the spine, despite the increased spinal compression that results [3, 6]. Despite the strength of some of these hypotheses and findings, other studies found no differences in trunk muscle behaviour between CLBP and healthy subjects [7], which could indicate that we do not fully understand all the variables that influence the detection of dysfunctions.
Muscle activation levels and muscle activation latencies have been studied not only in back pain, but also in fatigue. When fatigued healthy individuals were exposed to a sudden perturbation, some studies demonstrated increases in the electromyographic (EMG) amplitude as a strategy to compensate for the loss of force production [8]; others demonstrated lower trunk muscle co-contraction compared to non-fatigued people, which was associated with spinal stability vulnerability [9]; and others found longer activation latencies, reflecting a deterioration of the responsiveness and precision of the neuromuscular spindle system [10].
Most studies of muscle onset timing following an external perturbation in CLBP or fatigued subjects evaluated only the latency or amplitude of the onset, without analysing the rest of the muscle response. Knowing the whole time course of muscle behaviour could have important clinical implications for treatment or prevention interventions in those populations. Moreover, to the authors' knowledge, there are no studies in which measurements of muscle reactions following a perturbation are compared between CLBP patients and healthy controls and in which possible differences are compared with what happens when the healthy population is fatigued. CLBP patients and fatigued healthy subjects could experience similar neuromuscular strategies in an attempt to protect the spine. Therefore, the current study had three objectives: a) to evaluate differences between healthy subjects and those suffering from CLBP in the sequence and amount of EMG muscle activity that occur after a perturbation during a functional position; b) to evaluate, analogously, the effect of fatigue in healthy subjects; and c) to evaluate whether similar compensatory strategies might be used by both CLBP and fatigued subjects.
Twenty-four subjects with CLBP and 26 healthy subjects were recruited (25–55 years old). The inclusion criteria for CLBP subjects were constant or nearly constant pain in the lower back for over a year, with painful periods rated at least 7 on the numeric rating scale (NRS) (a segmented numeric version of the 100-mm Visual Analogue Scale with integers 0–10). Subjects who had other health problems that could affect the recorded or outcome data were excluded. Individuals who at the time of data collection were suffering pain rated higher than 4 on the NRS were asked to return on a later occasion. Subjects with CLBP were recruited from the Althaia Foundation (Spain) and healthy subjects were recruited from Manresa University (Spain), matched with the CLBP subjects for age, gender, height and body mass (Table 1). Each subject signed an informed consent form. The project was approved by the local Ethics Committee.
Table 1 Group sample description and significance values of the t-test and chi square test
Three identical sessions on three different days were designed for each subject to collect the external perturbation test (EPT) data (Figs. 1 and 2). Anthropometric measures necessary for calculating the weight applied in the EPT were collected at the beginning of each testing session. Finally, six EPT attempts were carried out, separated by 30-s intervals. Subjects were asked to stand in a bipedal semi-squat position holding a tray with both hands in front of the instrument that would release the load (Fig. 1). They had to stand in such a way that their acromion reached an elevation equal to 94% of their individual stature. The 94% elevation was chosen using visual inspection to reproduce the semi-squat posture. At the sound of a buzzer, after a random interval of two to ten seconds, a weight was dropped on the tray without warning and without being seen, causing a sudden perturbation in the flexion direction. The load applied was a weight released from 15 cm above the tray and corresponded to 3.5% of the predicted maximum extensor moment (PMEM) of each individual. The trunk PMEM was calculated according to gender as described previously [11]:
External perturbation test
Temporal representation of the average value of the bursts of each muscle (biceps brachii (BB), thoracic spinal erector (SE), right multifidus (RM), left multifidus (LM), external oblique (EO), internal oblique (IO)) for both groups and for the fatigue condition. a Group without lumbar pathology (H); b Group with nonspecific chronic low back pain (CLBP); c Fatigue condition (With-F). The coloured bars represent, for the different muscles, the moment when the first and second bursts after impact start and their duration. The striped areas represent the parts of the bursts belonging to two different bursts (overlapping)
$$ \text{Women: } \mathrm{PMEM} = 6.506 \times \mathrm{FFBM} - 47.2 $$
$$ \text{Men: } \mathrm{PMEM} = 9.227 \times \mathrm{FFBM} - 172.9 $$
whereby FFBM is the fat-free body mass estimated according to the method of Durnin and Womersley [12].
$$ \mathrm{Density} = c - m \times \log\left(\text{triceps} + \text{biceps} + \text{subscapular} + \text{supra-iliac skinfolds}\right) $$
$$ \%\,\text{fat} = \left(4.95/\mathrm{Density} - 4.50\right) \times 100 $$
$$ \mathrm{FFBM} = \text{weight} \times \left(100 - \%\,\text{fat}\right)/100 $$
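A minimal, illustrative Python helper chaining the formulas above might look as follows; the Durnin and Womersley constants c and m depend on age and sex and must be taken from the published tables (the values used in the example call are placeholders, and a base-10 logarithm is assumed).

```python
# Illustrative sketch (not the authors' code): from the four skinfolds to body
# density, percentage fat, fat-free body mass (FFBM) and the predicted maximum
# extensor moment (PMEM). The load used in the EPT was 3.5% of the PMEM.
import math

def pmem_from_skinfolds(weight_kg, skinfolds_mm, sex, c, m):
    """skinfolds_mm: (triceps, biceps, subscapular, supra-iliac) in mm."""
    density = c - m * math.log10(sum(skinfolds_mm))      # log10 assumed
    fat_pct = (4.95 / density - 4.50) * 100.0
    ffbm = weight_kg * (100.0 - fat_pct) / 100.0
    if sex == "female":
        return 6.506 * ffbm - 47.2
    return 9.227 * ffbm - 172.9

# Example with placeholder c and m values (not taken from the tables):
pmem = pmem_from_skinfolds(70.0, (10.0, 6.0, 12.0, 11.0), "male", c=1.17, m=0.063)
print(pmem, 0.035 * pmem)   # PMEM and the corresponding EPT load
```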
A fatigue protocol was added only in the second session and only for the healthy group, after the sixth EPT attempt. It consisted of holding a weight corresponding to 40% of the PMEM in the same position for as long as possible. After the fatigue protocol, another six EPT attempts were performed, with a 30–40 s delay between the fatigue protocol and the EPT attempts. The healthy group without fatigue was considered as the Non-F condition and the same group after the fatigue protocol as the With-F condition.
EMG analysis and data processing
A ME6000 electromyography system (Mega Electronics, Kuopio, Finland) was used to register the EMG signals. EMG recordings were conducted during all the maximum voluntary contraction (MVC) efforts and during all the EPT attempts. Right thoracic spinal erector (SE), right multifidus (RM), left multifidus (LM), right biceps brachii (BB), right external oblique (EO) and right internal oblique (IO) were recorded. Adhesive surface electrodes (Ambu-Blue-Sensor, M-00-S, Denmark) were placed 2 cm apart according to the anatomical recommendations of the SENIAM [13], except for the IO [14]. The skin was prepared according to SENIAM specifications [13]. The EMG data were collected at 2000 Hz and were amplified with a gain of 1000 using an analogue differential amplifier with a common mode rejection ratio of 110 dB. The input impedance was 10 GΩ. A Butterworth band-pass filter of 8–500 Hz (-3 dB points) was used.
An accelerometer (measuring range 10 G) located at the bottom of the tray and synchronised with the EMG was used as the indicator of the weight drop. The algorithm used was based on the averages and standard deviations of the baseline amplitude of the EMG record and was designed and validated by our group. This validated algorithm [15], which determines the points where the amplitude of the EMG record increases relative to the baseline and where it decreases to return to the same baseline, was used to identify EMG bursts. Bursts that were separated by a period of less than 15 ms were considered as a single burst, and bursts lasting less than 15 ms were not taken into account.
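The two post-processing rules for the detected bursts (merging bursts closer than 15 ms and discarding bursts shorter than 15 ms) can be expressed compactly; the following sketch is illustrative and is not the validated algorithm of reference [15].

```python
# Illustrative sketch of the burst post-processing rules described above.
FS = 2000               # sampling rate (Hz), as in the recordings
MIN_GAP_MS = 15         # bursts closer than this are merged
MIN_DURATION_MS = 15    # bursts shorter than this are discarded

def clean_bursts(bursts):
    """bursts: list of (start, end) sample indices of detected EMG bursts."""
    min_gap = MIN_GAP_MS * FS // 1000
    min_dur = MIN_DURATION_MS * FS // 1000
    merged = []
    for start, end in sorted(bursts):
        if merged and start - merged[-1][1] < min_gap:
            merged[-1][1] = max(merged[-1][1], end)   # merge with previous burst
        else:
            merged.append([start, end])
    return [(s, e) for s, e in merged if e - s >= min_dur]

# Example: the 2nd and 3rd bursts are 5 ms apart and are merged; the last burst
# lasts 7.5 ms and is discarded.
print(clean_bursts([(100, 180), (400, 500), (510, 620), (700, 715)]))
```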
Two categories of variables were analysed: 1) variables describing the timing, including the start and duration of the first and second bursts and the duration of co-contraction between the main trunk muscles after the impact; the duration of co-contraction was defined as the milliseconds during which the SE-EO, SE-IO or EO-IO pairs generated bursts in the same period; and 2) variables describing the amount of muscle activity post impact. The amount of activity was analysed based on the number of bursts after the impact and the amplitude increase after the impact (the ratio of the root mean square (RMS) of the post-impact EMG signal amplitude to the RMS of the pre-impact EMG signal amplitude). The RMS was determined over the intervals of 500 milliseconds before and after impact. For all variables, the median of all attempts (18 attempts per individual for the CLBP vs. healthy comparison and 6 attempts for the With-F vs. Non-F comparison) was calculated as the representative value for each individual [16].
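As an illustration, the amplitude-increase variable can be computed as below; the function and variable names are ours, and a 2 kHz sampling rate and 500 ms windows are assumed as described in the text.

```python
# Illustrative sketch of the amplitude-increase variable: RMS of the 500 ms of
# EMG after impact divided by the RMS of the 500 ms before impact.
import numpy as np

def amplitude_increase(emg, impact_idx, fs=2000, window_ms=500):
    w = int(window_ms * fs / 1000)
    pre = emg[impact_idx - w:impact_idx]
    post = emg[impact_idx:impact_idx + w]
    rms = lambda s: np.sqrt(np.mean(np.square(s)))
    return rms(post) / rms(pre)

# Synthetic check: a signal that is four times larger after the impact.
rng = np.random.default_rng(1)
emg = np.concatenate([0.05 * rng.standard_normal(2000),    # quiet baseline
                      0.20 * rng.standard_normal(2000)])   # activity after impact
print(amplitude_increase(emg, impact_idx=2000))            # close to 4
```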
Demographic differences between groups were studied using independent t-tests for parametric variables and chi-square tests for non-parametric variables. The analysis between groups (CLBP vs. healthy) was carried out for each variable using mixed group-by-muscle ANOVAs with repeated measurements for the muscle factor [16, 17]. Post hoc Tukey corrections were performed to analyse the significance of the muscle factor. Simple factor analysis was used to further analyse a significant group-by-muscle interaction, and finally post hoc Bonferroni corrections were performed to analyse the significance between muscles when the group was fixed.
For the analysis between conditions (With-F vs. Non-F) the same statistical procedures as for the analysis between groups were used, with the exception that the condition factor (Non-F, With-F) was treated in all cases as a repeated measurement. In all the analyses of variance, partial eta squared was considered as an estimate of the effect size. In the results section, all significant values (P < 0.05) are reported.
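A hedged sketch of the group-by-muscle mixed ANOVA, written with the pingouin package on synthetic long-format data, is given below; the column names and synthetic values are illustrative, and partial eta squared ('np2') is pingouin's default effect-size estimate, matching the one reported here.

```python
# Illustrative sketch (not the authors' analysis code): mixed group-by-muscle
# ANOVA with repeated measures on the muscle factor, using pingouin.
import numpy as np
import pandas as pd
import pingouin as pg

rng = np.random.default_rng(0)
muscles = ["BB", "SE", "RM", "LM", "EO", "IO"]
rows = []
for subject in range(50):
    group = "CLBP" if subject < 24 else "healthy"
    for muscle in muscles:
        onset = 60.0 + (5.0 if group == "CLBP" else 0.0) + rng.normal(0.0, 10.0)
        rows.append({"subject": subject, "group": group,
                     "muscle": muscle, "onset_ms": onset})
df = pd.DataFrame(rows)

aov = pg.mixed_anova(data=df, dv="onset_ms", within="muscle",
                     subject="subject", between="group")
print(aov[["Source", "F", "p-unc", "np2"]])
```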
The CLBP and healthy groups showed no differences in their baseline demographic and anthropometric characteristics, and similar weights were applied during the test in both groups (Table 1). The specific pain history (intensity and duration) of the CLBP subjects during the year prior to data collection can be found in Table 2. As a summary of the variables describing the timing, Fig. 2a, b and c graphically show the averages of these variables for the CLBP group, the healthy group and the fatigue condition.
Table 2 Pain history, intensity and duration, of the group with CLBP
Mean and standard deviation (error bars) of each muscle (biceps brachii (BB), thoracic spinal erector (SE), right multifidus (RM), left multifidus (LM), external oblique (EO), internal oblique (IO)) of variables describing the timing and variables describing the amount of muscle activity post impact between healthy subjects and those with chronic low back pain and between healthy subjects without fatigue and healthy subjects with fatigue
Regarding differences between groups (CLBP vs healthy), statistical analysis of the variables describing the timing and the quantitative variables of muscle activity (number of bursts following the impact and the increase of activity after the impact) showed no differences between CLBP and healthy subjects (Table 3 and Fig. 3).
Table 3 Comparison between healthy and chronic low back pain subjects
Regarding differences between conditions (With-F vs. Non-F), statistical analysis of the variables describing the timing showed a significant delay of the first burst for the With-F condition when compared to the Non-F condition (Table 4 and Fig. 3). Similarly, the With-F condition showed a significantly shorter duration of the BB first burst when compared to the Non-F condition (Table 4 and Fig. 3). On the other hand, the co-contraction levels showed significant differences between the two conditions, with lower values in the With-F compared to the Non-F condition (Table 4 and Fig. 3). In the analysis of the quantitative variables, the With-F condition compared to Non-F showed a smaller number of bursts after impact and lower values of increase of activity after the impact, although the presence of a significant interaction (P < 0.001) showed that the lower values of increase of activity after the impact in the With-F condition were observed only for the SE and BB muscles (Table 4 and Fig. 3).
Table 4 Comparison between non-fatigued healthy subjects and fatigued healthy subjects
Although not statistically significant, the CLBP group showed a pattern of response for the first burst delay closer to fatigued than to non-fatigued healthy subjects (Fig. 4).
Mean and standard deviation (error bars) of each muscle (biceps brachii (BB), thoracic spinal erector (SE), right multifidus (RM), left multifidus (LM), external oblique (EO), internal oblique (IO)) of the variable of first burst post impact between healthy subjects, those with chronic low back pain and between healthy subjects with fatigue
The present study shows that people with CLBP used a similar muscle pattern to the healthy group when reacting to an unexpected external load. On the other hand, fatigue caused changes in the muscle pattern, regarding both temporal characteristics and the amount of muscle activity, compared to the non-fatigued condition. Although not statistically significant, the CLBP group showed a pattern of response for some temporal variables closer to fatigued than to non-fatigued healthy subjects (Fig. 4). This similarity may indicate that some behavioural characteristics could be shared between the CLBP and healthy fatigued groups. Other studies have also shown similar characteristics when comparing these two groups, e.g. proprioception alteration [18, 19]. The fact that the common temporal changes are less obvious in subjects with CLBP could be caused by the greater heterogeneity of characteristics presented by this group. However, it should be taken into account that the muscular alteration in individuals with CLBP is maintained over time, unlike the With-F group, which only has this condition on a temporary basis.
Greater latency in the activation of the first burst for most muscles in the CLBP group has been reported by most authors [2, 3, 20]. However, in this study, no significant differences between the CLBP and healthy groups were found in the delay of the first muscular activation after an external perturbation. Although not statistically significant, the CLBP group showed a pattern of response of the first burst latency closer to fatigued than to non-fatigued healthy subjects, with the fatigued group showing a clear delay in the first burst activation (Fig. 4). This tendency could support current theories [16, 21, 22] which describe delays in muscle activation as a phenomenon that decreases the control of the spine, possibly leading to chronic pain. The lack of findings in this study, and by other authors concerning this issue [7], could be explained by different factors. The small sample size and large variability among CLBP subjects [5] could contribute to diminished power. Future studies with a larger sample size and/or sub-classifications of CLBP would be required to clarify this issue. In addition, the specific characteristics of the test used in the different studies could also contribute to limited detection of greater latencies in CLBP as described by others [2, 3, 20]. Controlling the pre-activation of the trunk muscles and using a more fixed position [7] could imply an experimental condition where CLBP muscle deficits were not observed. In healthy people, abdominal and trunk muscle pre-activation has been shown to increase spinal stiffness and stability [6]. Stable contexts may not be the situations in which CLBP subjects present deficits in spine control. Moreover, one may interpret that the more realistic the position of the test (this study vs. others [7]), the more it relates to CLBP subjects' daily lives. The semi-squat position, used in the current study, is a recommended posture for handling physical efforts made at the spine level, and it is frequently used without external stabilization. Also, unexpected perturbations can be experienced in daily activities. A reduced ability of CLBP subjects to protect the spine in this frequent situation seems more relevant than results from an unusual context.
Subsequent muscular reactions (EMG bursts) also seem to be important to guarantee spinal protection. In the current study, the CLBP group showed no differences when compared to the healthy group without fatigue. The current results can only be compared with other studies investigating the first reaction response since, to the authors' knowledge, no other research to date has considered subsequent reactions. Only one study evaluated the completion time of the first burst, without observing differences between the CLBP group and the healthy group in that parameter [17]. The variability among subjects in the motor recruitment pattern [5] could be greater in subsequent muscular reactions than in the first reaction. This could be the reason for the lack of significance among the subsequent muscle reactions found in this study and others.
The CLBP group showed a similar amount of activity to the healthy group after the impact, as well as similar co-contraction based on the calculation of burst synchronisation after impact in the three muscle groups. However, in different tasks, other studies found more muscle activation in certain muscles in people with CLBP [7], as well as increased agonist and antagonist activity attributed to increased muscle co-contraction [4]. Both variables of the current study that describe the amount of muscle activity post impact (burst number and amplitude increase) were relative to pre-impact activity, without considering possible absolute group differences in the normalized EMG amplitude. That could be a reason explaining why other studies show discrepancies regarding muscular activity in CLBP subjects [4, 7]. Interpreting those results on the amount of muscle activity together, one may conclude that CLBP subjects may use more activity than healthy ones, but without a large increase of muscular activity after the external perturbation. Regarding the co-contraction parameter, this study cannot be compared directly with those studies where co-contraction was calculated as an amplitude increment of agonist and antagonist muscles [4]. The co-contraction parameter of this study is based on muscle onset and offset times, so the similar co-contraction between groups reflects that no greater burst synchronisation was found in the CLBP group compared with the healthy one. The co-contraction parameter of this study may be contrasted with studies like Mehta et al. [17], Radebold [3] and Cholewicki [23]. Mehta et al. [17] analysed the coincidence in time of the first burst on a sudden perturbation, showing lower synchronisation in CLBP subjects. Conversely, Radebold [3] and Cholewicki [23] determined that greater co-contraction occurred in the CLBP group, as they observed less muscle agonist deactivation once the load was withdrawn when compared to the healthy group. Considering all CLBP evidence together, one might conclude that the increase of muscle activity and the presence of co-contraction are strategies used by people with CLBP to reduce pain [5], but are not present in all types of tasks. It appears that in those tasks that require a sudden increase in muscle activity to control the spine (this study among others [17]), the delay in muscle activation could make synchronisation impossible, which undermines the possibility of co-contraction. Conversely, in slower tasks [4] or in tasks with considerable initial co-contraction [3, 23], increased slowness in the muscular reaction would not prevent the co-contraction strategy. Similarly, even though absolute differences between groups in normalized EMG amplitude were not assessed in this study, increased muscle activity as a strategy by CLBP subjects to reduce pain [5] may be more difficult to deploy after a sudden perturbation than in a static position [24] or when undertaking a slower task [4]. A recent new theory regarding adaptation to pain supports this task-dependency interpretation [5]. In sudden perturbations, delays in muscle activation and the difficulty of using the previously described strategies to control the spine [3, 6] could imply a vulnerability for CLBP subjects and may play a role in the chronification process. Muscle training to improve muscle coordination and the quickness of muscle responses could be a strategy to improve CLBP dysfunctions. Moreover, one might consider the need for functional exercises (semi-squat, among others) as a treatment for CLBP.
Regarding the fatigue condition, there were greater latencies in the activation of the first burst and alterations in subsequent reaction times (earlier deactivation of the first burst of the BB muscle and less co-contraction of SE, EO and IO). In the fatigue condition, healthy subjects seem to show a phenomenon that decreases the control of the spine, similar to that described for the temporal alterations in CLBP [16, 21, 22]. In this transitory situation of fatigue, the lack of control seems to be much more pronounced than in CLBP subjects and could lead to tissue injury. Some authors found results similar to the current study regarding the first onset [10], while others have not observed latency differences in the first burst [8, 25]. Again, discrepancies in the literature could be explained by the specific characteristics of the perturbation, the different fatigue levels achieved prior to the reaction test and the task used to induce fatigue [26]. Future studies analysing the same subjects with different tasks and methodologies could help to resolve this issue. Contrary to the CLBP group, the fatigue condition showed signs of reduced activity after the impact compared with the non-fatigued condition (a smaller number of bursts and a smaller increase in muscle activation after impact for some muscles). This reduction may indicate that fatigue leads to less control of the spine, overloading different structures, and is more likely to result in back injury. Some measures should be taken against fatigue to prevent spine overloading.
CLBP subjects constitute a very heterogeneous and multifactorial group. Even though the sample size in the present study could have been larger, we used a sample size similar to or larger than those currently used in similar research studies [8, 16]. For this reason, results should be interpreted cautiously and considered exploratory. Moreover, we must assume that the current experimental findings can be extrapolated only to people with similar demographic and anthropometric characteristics (age, body mass, etc.). It should be noted that this study evaluated a static and very specific task that is not representative of the multiple dynamic tasks carried out by individuals in their daily lives. Another limitation is the absence of the transversus abdominis or other task-contributing muscles, which may mean that we have overlooked muscles with a role in the muscle recruitment patterns and that could present different behaviour between groups (CLBP vs. healthy) or between conditions (With-F vs. Non-F). Moreover, the fact that we did not conduct a complete bilateral assessment could mask some dysfunctional muscle behaviours related to the CLBP or With-F groups.
Finally, two important aspects must be considered in the interpretation of temporal electromyographic data: a) the difficulty of determining burst onset; despite the use of an algorithm designed specifically for this purpose, greater initial muscle activity can lead to greater variability in the determination of the onset; and b) the actual muscle contraction itself is not being recorded, as the EMG signal represents the muscle's electrical activity. The type of muscle fibres or the electrode's distance from the centre of the innervation zone, for example, may entail certain differences between the electrical activity records of the different muscles and the delay that exists before the final contraction [27].
When controlling the trunk after an unexpected external perturbation, CLBP subjects seemed to react similarly to healthy subjects regarding muscle activity post impact, although the CLBP group showed temporal characteristics of muscle activity that were in between those of the non-fatigued and fatigued healthy groups. In contrast, clear differences in muscle activity were displayed by the healthy subjects in the same situation when fatigued: fatigued subjects used different muscle patterns when compared to healthy subjects without fatigue. They reacted with greater muscle latencies in the activation of the first burst, among other temporal characteristics, and they presented more reduced muscle activity after impact than non-fatigued subjects. A temporal characteristic shared by CLBP and fatigued healthy people was a delay of the first burst of muscle activity after impact. We suggest that these muscle patterns, present in CLBP and fatigued healthy subjects and especially in sudden perturbations, could imply a vulnerability and may play a role in CLBP dysfunction or may lead to back injuries in fatigued people.
BB: Right biceps brachii
CLBP: Chronic low back pain
EMG: Electromyographic
EO: Right external oblique
EPT: External perturbation test data
FFBM: Fat-free body mass
IO: Right internal oblique
LM: Left multifidus
RM: Right multifidus
Non-F: The healthy group without fatigue
NRS: Numeric rating scale
PMEM: Predicted maximum extensor moment
RMS: Root mean square
SE: Right thoracic spinal erector
With-F: The healthy group after the fatigue protocol
Hoy D, Bain C, Williams G, March L, Brooks P, Blyth F, et al. A systematic review of the global prevalence of low back pain. Arthritis Rheum. 2012;64:2028–37.
Hodges PWP. Changes in motor planning of feedforward postural responses of the trunk muscles in low back pain. Exp Brain Res. 2001;141:261–6. doi:10.1007/s002210100873.
Radebold A, Cholewicki J, Panjabi MM, Patel TC. Muscle response pattern to sudden trunk loading in healthy individuals and in patients with chronic low back pain. Spine (Phila Pa 1976). 2000;25:947–54.
D'hooge R, Hodges P, Tsao H, Hall L, Macdonald D, Danneels L. Altered trunk muscle coordination during rapid trunk flexion in people in remission of recurrent low back pain. J Electromyogr Kinesiol. 2013;23:173–81. doi:10.1016/j.jelekin.2012.09.003.
Hodges PW, Tucker K. Moving differently in pain: a new theory to explain the adaptation to pain. Pain. 2011;152:S90–8. doi:10.1016/j.pain.2010.10.020.
Vera-Garcia FJ, Brown SHM, Gray JR, McGill SM. Effects of different levels of torso coactivation on trunk muscular and kinematic responses to posteriorly applied sudden loads. Clin Biomech. 2006;21:443–55. doi:10.1016/j.clinbiomech.2005.12.006.
Larivière C, Forget R, Vadeboncoeur R, Bilodeau M, Mecheri H, Larivière C, et al. The effect of sex and chronic low back pain on back muscle reflex responses. Eur J Appl Physiol. 2010;109:577–90. doi:10.1007/s00421-010-1389-7.
Dupeyron A, Perrey S, Micallef JP, Pélissier J. Influence of back muscle fatigue on lumbar reflex adaptation during sudden external force perturbations. J Electromyogr Kinesiol. 2010;20:426–32. doi:10.1016/j.jelekin.2009.05.004.
Chow DH, Man JW, Holmes AD, Evans JH. Postural and trunk muscle response to sudden release during stoop lifting tasks before and after fatigue of the trunk erector muscles. Ergonomics. 2004;47:607–24. doi:10.1080/0014013031000151659.
Wilder DG, Aleksiev AR, Magnusson ML, Pope MH, Spratt KF, Goel VK. Muscular response to sudden load: A tool to evaluate fatigue and rehabilitation. Spine (Phila Pa 1976). 1996;21:2628–39.
Mannion AF, Adams MA, Cooper RG, Dolan P. Prediction of maximal back muscle strength from indices of body mass and fat-free body mass. Rheumatology. 1999;38:652–5. doi:10.1093/rheumatology/38.7.652.
Durnin JV, Womersley J. Body fat assessed from total body density and its estimation from skinfold thickness: measurements on 481 men and women aged from 16 to 72 years. Br J Nutr. 1974;32:77–97. doi:10.1079/BJN19740060.
Hermens HJ, Freriks B, Disselhorst-Klug C, Rau G. Development of recommendations for SEMG sensors and sensor placement procedures. J Electromyogr Kinesiol. 2000;10:361–74.
Vera-Garcia FJ, Moreside JM, McGill SM. MVC techniques to normalize trunk muscle EMG in healthy women. J Electromyogr Kinesiol. 2010;20:10–6. doi:10.1016/j.jelekin.2009.03.010.
Jubany J, Angulo-Barroso R. An algorithm for detecting EMG onset/offset in trunk muscles during a reaction-stabilization test. J Back Musculoskelet Rehabil. 2015;1:1–12. doi:10.3233/BMR-150617.
Liebetrau A, Puta C, Anders C, de Lussanet MHE, Wagner H. Influence of delayed muscle reflexes on spinal stability. Model-based predictions allow alternative interpretations of experimental data. Hum Mov Sci. 2013;32:954–70. doi:10.1016/j.humov.2013.03.006.
Mehta R, Cannella M, Smith SS, Silfies SP. Altered Trunk Motor Planning in Patients with Nonspecific Low Back Pain. J Mot Behav. 2010;42:135–44. doi:10.1080/00222891003612789.
Boucher JA, Abboud J, Descarreaux M. The influence of acute back muscle fatigue and fatigue recovery on trunk sensorimotor control. J Manip Physiol Ther. 2012;35:662–8.
Gill KP, Callaghan MJ. The measurement of lumbar proprioception in individuals with and without low back pain. Spine (Phila Pa 1976). 1998;23:371–7.
Shenoy S, Balachander H, Sandhu JS. Long latency reflex response of superficial trunk musculature in athletes with chronic low back pain. J Back Musculoskelet Rehabil. 2013;26:445–50.
Hodges P, van den Hoorn W, Dawson A, Cholewicki J. Changes in the mechanical properties of the trunk in low back pain may be associated with recurrence. J Biomech. 2009;42:61–6. doi:10.1016/j.jbiomech.2008.10.001.
Panjabi MM. A hypothesis of chronic back pain: ligament subfailure injuries lead to muscle control dysfunction. Eur Spine J. 2006;15:668–76. doi:10.1007/s00586-005-0925-3.
Cholewicki J, Greene HS, Polzhofer GK, Galloway MT, Shah RA, Radebold A. Neuromuscular function in athletes following recovery from a recent acute low back injury. J Orthop Sport Phys Ther. 2002;32:568–75.
Kumar S, Prasad N. Torso muscle EMG profile differences between patients of back pain and control. Clin Biomech. 2010;25:103–9. doi:10.1016/j.clinbiomech.2009.10.013.
Granata KP, Slota GP, Wilson SE. Influence of fatigue in neuromuscular control of spinal stability. Hum Factors. 2004;46:81–91.
Enoka RM, Stuart DG. Neurobiology of muscle fatigue. J Appl Physiol. 1992;72:1631–48.
De Luca CJ. The use of surface electromyography in biomechanics. J Appl Biomech. 1997;13:135–63.
We would like to thank Professor Josep Molina Sallent for his valuable support in setting up the software presented in this paper. We would also like to thank all subjects who participated in this study.
This work was supported in part (economic support) by grants from the Catalonia and Baleares Medic and Health Scientific Academy (Catalano-Balear Society of Physiotherapy) and both full affiliations (Institut Nacional d'Activitat Física de Catalunya Barcelona and Faculty of Health Sciences at Manresa, University of Vic-Central University of Catalonia).
The datasets that are used and analyzed for the present study are available from the corresponding author upon reasonable request.
RA contributed during all the process giving directions and advice on the design, analysis, interpretation and writing of the manuscript. LD was a contributor in data interpretation and writing the manuscript. JJ was the major contributor in all the study parts: design, data collection, analysis, interpretation and writing of the manuscript. All authors read and approved the final manuscript.
Consent for publication was obtained from the person shown in Fig. 1.
The project was approved by the Ethics Committee of the Catalan Sports Administration and the Ethics Committee of "Fundació Unió Catalana d'Hospitals".
Institut Nacional d'Educació Física de Catalunya, (INEFC), University of Barcelona, Avinguda de l'Estadi 12-22, Anella Olímpica, 08038, Barcelona, Spain
Júlia Jubany & Rosa Angulo-Barroso
Manresa University (Universitat de Vic Universitat Central de Catalunya), Avinguda Universitària 4-6, 08242, Manresa, Barcelona, Spain
Júlia Jubany
Department of Rehabilitation Sciences and Physiotherapy, Faculty of Medicine and Rehabilitation Sciences, Ghent University, Sint-Pietersnieuwstraat 25, B-9000, Ghent, Belgium
Lieven Danneels
Department of Kinesiology, California State University, Northridge (CSUN), 18111 Nordhoff Street, 91330, Northridge, CA, USA
Rosa Angulo-Barroso
Correspondence to Júlia Jubany.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Jubany, J., Danneels, L. & Angulo-Barroso, R. The influence of fatigue and chronic low back pain on muscle recruitment patterns following an unexpected external perturbation. BMC Musculoskelet Disord 18, 161 (2017). https://doi.org/10.1186/s12891-017-1523-3
Muscle pattern
Semi-squat
Physics results in TCS?
It seems clear that a number of subfields of theoretical computer science have been significantly impacted by results from theoretical physics. Two examples of this are
Quantum computation.
Statistical mechanics results used in complexity analysis/heuristic algorithms.
So my question is: are there any major areas I am missing?
My motivation is very simple: I'm a theoretical physicist who has come to TCS via quantum information and I am curious as to other areas where the two areas overlap.
This is a relatively soft question, but I don't mean this to be a big-list type question. I'm looking for areas where the overlap is significant.
soft-question quantum-computing statistical-physics physics
asked Oct 10 '10 at 1:16
Joe Fitzsimons
$\begingroup$ I don't know if complex systems count, so I'm not yet posting as an answer. It's a field that has a lot to do with social network analysis, and networks in general, and has been invaded by physicists in large numbers, wielding weapons from statistics and thermodynamics. Whether it's been invaded by physics is a different story. $\endgroup$ – Suresh Venkat Oct 10 '10 at 2:50
$\begingroup$ I would think it counts. $\endgroup$ – Joe Fitzsimons Oct 10 '10 at 13:56
$\begingroup$ see also how are physics/CS getting united physics.se $\endgroup$ – vzn Apr 27 '14 at 15:33
The search technique simulated annealing is inspired by the physical process of annealing in metallurgy.
Annealing is a heat treatment where the strength and hardness of the substance being treated can change dramatically. Often this involves heating the substance to an extreme temperature and then allowing it to cool slowly.
Simulated annealing avoids local minima/maxima in search spaces by incorporating a degree of randomness (the temperature) in the search process. As the search process proceeds, the temperature gradually cools, which means that the amount of randomness in the search decreases. Apparently it is quite an effective search technique.
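For concreteness, a minimal simulated-annealing loop in Python (the toy objective, neighbour move and geometric cooling schedule below are arbitrary choices, not a reference implementation):

```python
import math, random

def anneal(f, x0, neighbour, t0=1.0, cooling=0.995, steps=10_000):
    """Minimise f starting from x0; `neighbour` proposes a random nearby state."""
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbour(x)
        fy = f(y)
        # Always accept improvements; accept worse moves with a probability
        # that shrinks as the temperature t cools.
        if fy <= fx or random.random() < math.exp((fx - fy) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy example: a 1-D objective with many local minima.
f = lambda x: math.sin(5 * x) + 0.1 * x * x
print(anneal(f, x0=3.0, neighbour=lambda x: x + random.uniform(-0.5, 0.5)))
```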
Dave Clarke
$\begingroup$ supercooldave: My limited understanding was that simulated annealing only avoids local minima that are "sufficiently shallow." Is that correct? $\endgroup$ – Joshua Grochow Oct 11 '10 at 14:00
$\begingroup$ @Joshua: in general, simulated annealing does not always manage to avoid local minimal. It can always get stuck in the wrong place. Some experimentation is required to find a good starting point and so forth. $\endgroup$ – Dave Clarke Oct 11 '10 at 14:04
$\begingroup$ Of course, it bears noting that 'real' annealing doesn't always avoid local minima either! Defects (in the mathematical-physics sense) aren't unheard of. $\endgroup$ – Steven Stadnicki Jul 15 '11 at 21:25
$\begingroup$ If the temperature decrease takes places exponentially slowly, then simulated annealing gains many desirable global optimization properties. Of course, it also gains an exponential run time. $\endgroup$ – Elliot JJ Feb 19 '12 at 23:19
Going the other way around (from TCS to physics), matrix product states, PEPS (projected entangled pair states), and MERA (multiscale entanglement renormalization ansatz) have been significantly informed by TCS ideas which were adapted in quantum information theory. These acronyms are all techniques for approximating the states of quantum spin systems that are used by condensed matter theorists, and in many cases these techniques seem to work better than any tools previously known.
Peter Shor
$\begingroup$ One thing that has struck me about this area is that it seems to be more the theoretical physics community within quantum information rather than the TCS community (if we can really make such a distinction) that seems to be interested in these techniques. $\endgroup$ – Joe Fitzsimons Oct 23 '10 at 22:22
$\begingroup$ I would definitely agree. I tried to get a grad student interested in them early on, but his reaction was "bleah ... these are just heuristic approximation methods, and you can't say anything rigorous about them." Of course, this turned out to be incorrect. $\endgroup$ – Peter Shor Oct 23 '10 at 23:59
$\begingroup$ (@Shor) I liked this answer very much, and have provided a companion answer with several more references---at least one of which (Joseph Landsburg's 2008 survey Geometry and the complexity of matrix multiplication) is most definitely at the TCS end of the spectrum. cstheory.stackexchange.com/questions/2074/… $\endgroup$ – John Sidles Jul 15 '11 at 21:24
Complex systems is a field that has a lot to do with social network analysis, and networks in general, and has been invaded by physicists in large numbers, wielding weapons from statistics and thermodynamics. Whether it's been invaded by physics is a different story.
Suresh Venkat
$\begingroup$ I'm developing quite a strong interest in networks and social network analysis. Do you have any references? $\endgroup$ – Dave Clarke Jun 13 '11 at 17:03
$\begingroup$ hmm. Best to start with the Kleinberg/Easley book (which is a good undergrad-level text). Then you could work forwards and backwards from work by Aaron Clauset and Mark Newman $\endgroup$ – Suresh Venkat Jun 14 '11 at 9:33
A result of Pour-El and Richards Adv. Math. 39 215 (1981) gives the existence of noncomputable solutions to the 3D wave equation for computable initial conditions by using the wave to simulate a universal Turing machine.
S Huntsman
$\begingroup$ I would also mention DNA computing as an area of overlap, albeit with more tenuous connections to theoretical physics per se. $\endgroup$ – S Huntsman Oct 10 '10 at 2:08
$\begingroup$ I more had in mind areas where TCS benefited from results in physics, rather than the other way around. $\endgroup$ – Joe Fitzsimons Oct 10 '10 at 12:49
$\begingroup$ Well then (although it might be considered implicit in or related to some other stuff mentioned on this page) I would be remiss in not mentioning the theory of reversible computation, most notably the circle of ideas born from Landauer's work, which has influenced many more areas besides quantum computing. $\endgroup$ – S Huntsman Oct 10 '10 at 17:34
$\begingroup$ To comment on Suresh's answer (not enough rep to comment there): there have been many fruitful applications of ideas in physics to the analysis of dynamics on networks. As one example I recall a paper discussing evidence that TCP traffic exhibited self-organized criticality. As another example a few researchers (including myself) have worked on applying ideas from physics (not just entropy) to characterizing network traffic for anomaly detection. Of course, this leaves the T out of TCS. $\endgroup$ – S Huntsman Oct 10 '10 at 17:45
The connection goes the other way around, too. A while ago theoretical computer scientists who work in domain theory got interested in relativity. They proved results about how to reconstruct the structure of spacetime from the causality structure. This is something quite familiar to domain theorists, where the basic objects of interest are partial orders whose topology is determined by the order. You might have a look at http://www.cs.mcgill.ca/~prakash/Pubs/dom_gr_review.pdf
Andrej Bauer
$\begingroup$ Yes, actually I heard Prakash speak about this at his workshop in Barbados. Really interesting work. I was however under the impression that he also had a physics background. That aside, there are certainly contributions in both directions. It just happens that I was particularly interested in finding out about one direction in particular. Presumably asking about the influence of TCS on physics would be better suited to a physics website, since people in the field which adapts ideas from a second field are better placed to determine which of these have made significant impact on the first. $\endgroup$ – Joe Fitzsimons Oct 11 '10 at 12:05
A very old example (which could be subsumed by Suresh's answer, however, this is a different tack) is the influence of the theory of electrical networks, e.g. Kirchhoff's circuit laws, on combinatorics, graph theory, and probability.
RJK
One area that has seen a few applications, but not IMO enough, is approximating discrete structures or processes with analytic approximations. This is big business in mathematics (e.g., analytic number theory) and physics (all of statistical mechanics), but hasn't proved as popular in CS for some reason.
A famous application of this was in the design of the Connection Machine. This was a massively parallel machine, and as part of its design they needed to figure out how big to make the buffers in the router. Feynman modelled the router with PDEs, and showed the buffers could be smaller than traditional inductive arguments could establish. Danny Hillis recounts the story in this essay.
Neel Krishnaswami
$\begingroup$ What about analytic combinatorics (Flajolet and Sedgewick)? $\endgroup$ – RJK Oct 11 '10 at 13:08
Gauge theory for heuristic approximations to integer programming (a few of Misha Chertkov's papers). Renormalization group methods for combinatoric counting, Ch. 10–12 of Rudnick/Gaspari's "Elements of the Random Walk." Applying Feynman's path integral decomposition (i.e., Section 9.5.1) to counting self-avoiding walks. For the connection to TCS, note that the regime of tractability for approximate counting on graphs depends on the growth rate of self-avoiding walks.
Yaroslav Bulatov
Statistical physics has given computer scientists a novel way of looking at SAT, as overviewed here. The idea is that as the ratio of clauses to variables involved in a 3-SAT formula increases from around 4 to around 5 we go from being able to solve the vast majority of 3-SAT instances to being able to solve very few. This transition is regarded as a "phase change" in SAT.
This idea gained particular notoriety this past summer from Deolalikar's alleged P vs. NP paper.
Huck Bennett
$\begingroup$ Yikes, I just realized that Joe referenced this in his original question. Hopefully this elaborates a bit. $\endgroup$ – Huck Bennett Dec 3 '10 at 7:46
Force-based graph drawing algorithms are another example. The idea is to consider each edge to be a spring and the layout of the nodes of the graph corresponds to finding equilibrium in the collection of springs.
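A minimal sketch of the spring analogy (attraction along edges, repulsion between all node pairs, small relaxation steps toward equilibrium); the constants and the toy cycle graph are arbitrary illustrations, not any particular published layout algorithm:

```python
import random

def force_layout(nodes, edges, iters=200, k=1.0, step=0.05):
    """Move nodes toward a spring equilibrium: edges attract, all node pairs repel."""
    pos = {v: [random.random(), random.random()] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                        # repulsion between every pair of nodes
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d2 = dx * dx + dy * dy + 1e-9
                disp[u][0] += k * k * dx / d2
                disp[u][1] += k * k * dy / d2
        for u, v in edges:                     # spring-like attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            disp[u][0] -= dx / k
            disp[u][1] -= dy / k
            disp[v][0] += dx / k
            disp[v][1] += dy / k
        for v in nodes:                        # take a small step toward equilibrium
            pos[v][0] += step * disp[v][0]
            pos[v][1] += step * disp[v][1]
    return pos

print(force_layout("abcd", [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```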
$\begingroup$ I wouldn't have thought that particularly TCS, but it's such a cool technique you get a +1 from me. After all, some areas of computer science are very heavily dependent on physics (i.e. SIGGRAPH). $\endgroup$ – Joe Fitzsimons May 25 '11 at 14:13
$\begingroup$ Graphs are surely TCS. And they need to be drawn. And David Eppstein does graph drawing. (This is my compelling argument.) $\endgroup$ – Dave Clarke May 25 '11 at 14:17
$\begingroup$ Ok, I'll accept that argument. $\endgroup$ – Joe Fitzsimons May 25 '11 at 14:21
$\begingroup$ This technique is a major player in graph drawing. definitely worth mentioning $\endgroup$ – Suresh Venkat May 25 '11 at 14:34
$\begingroup$ Great example! +1 from me $\endgroup$ – George Jul 22 '11 at 21:29
Early distributed systems theory, especially papers by Leslie Lamport et al., has had some impact from special relativity in getting the correct picture with respect to (fault-tolerant) agreement on a global system state. See entry 27. (Time, Clocks and the Ordering of Events in a Distributed System, Communications of the ACM 21, 7 (July 1978), 558-565) in the Writings of Leslie Lamport, where Lamport gives the following background information on his paper:
The origin of this paper was a note titled The Maintenance of Duplicate Databases by Paul Johnson and Bob Thomas. I believe their note introduced the idea of using message timestamps in a distributed algorithm. I happen to have a solid, visceral understanding of special relativity (see [5]). This enabled me to grasp immediately the essence of what they were trying to do. Special relativity teaches us that there is no invariant total ordering of events in space-time; different observers can disagree about which of two events happened first. There is only a partial order in which an event e1 precedes an event e2 iff e1 can causally affect e2. I realized that the essence of Johnson and Thomas's algorithm was the use of timestamps to provide a total ordering of events that was consistent with the causal order. This realization may have been brilliant. Having realized it, everything else was trivial. Because Thomas and Johnson didn't understand exactly what they were doing, they didn't get the algorithm quite right; their algorithm permitted anomalous behavior that essentially violated causality. I quickly wrote a short note pointing this out and correcting the algorithm.
Martin Schwarz
I have fleshed-out this answer with an extended answer on MathOverflow to Gil Kalai's community wiki question "[What is] A Book You Would Like to Write."
The extended answer seeks to link fundamental issues in TCS and QIT to practical issues in healing and regenerative medicine.
This answer extends Peter Shor's answer, which discusses the roles of matrix product states in TCS and physics. Two recent surveys in the Bulletin of the AMS are relevant to matrix product states, and both surveys are well-written, free of pay-wall restrictions, and reasonably accessible to non-specialists:
Joseph M. Landsberg's Geometry and the complexity of matrix multiplication (2008)
Alvaro Pelayo's and San Vu Ngoc's Symplectic theory of completely integrable Hamiltonian systems
The mathematical arena for Landsberg's survey is secant varieties of Segre varieties, while the arena for Pelayo's and Ngoc's survey is four-dimensional symplectic manifolds … it takes a while to appreciate that these two arenas both are matrix product states, as viewed respectively from a computational perspective (Landsberg) and a geometric perspective (Pelayo and Ngoc). Moreover, Pelayo and Ngoc include in their survey a discussion of Babelon, Cantini, and Douçot's A semi-classical study of the Jaynes–Cummings model (noting that the Jaynes–Cummings model is often encountered in the literature of condensed matter physics and quantum computing).
Each of these references goes far to illuminate the others. In particular, it has been helpful in our own (very practical) spin dynamical calculations to appreciate that the quantum state-spaces that are described variously in the literature as tensor network states, matrix product states, and secant varieties of Segre varieties are richly endowed with singularities whose algebraic, symplectic, and Riemannian structure is at present very incompletely understood (as Pelayo and Ngoc review).
For our engineering purposes, the Landsberg/algebraic geometry approach, in which the state-space of quantum dynamics is viewed as an algebraic variety rather than a vector space, is emerging as the most mathematically natural. This is surprising to us, but in common with many researchers, we find that the toolset of algebraic geometry is gratifyingly effective in validating and speeding practical quantum simulations.
Quantum simulationists presently enjoy the puzzling circumstance that large numerical quantum simulations very often perform much better than we have any known reason to expect. As mathematicians and physicists arrive at a shared understanding, this puzzlement surely will diminish and the enjoyment surely will remain. Good! :)
John Sidles
Much of the math that we use was originally invented to solve physics problems. Examples include calculus (Newtonian gravity) and Fourier series (heat equation).
Warren Schudy
$\begingroup$ In a similar vein, Belkin, Narayanan and Niyogi (FOCS '06, dx.doi.org/10.1109/FOCS.2006.34) used mathematical analysis from the study of heat flow and diffusion to give a fast randomized algorithm for computing the surface area of a convex body in n dimensions. $\endgroup$ – arnab Oct 10 '10 at 17:33
$\begingroup$ good example. although is this an example of physics or mathematics ? :) $\endgroup$ – Suresh Venkat Oct 10 '10 at 17:35
There is a recent paper which establishes a connection between computer security and the second principle of thermodynamics.
http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=6266166
qsp
I know some examples in machine learning. It is very common for thermodynamic ideas to be used in this area: Boltzmann machines, Hopfield networks, the wake-sleep algorithm. Markov chains were initially used in physics, and today they have applications in reinforcement learning. And there is a technique (momentum) used to improve the gradient descent optimization algorithm, which is inspired by a concept that comes from mechanics.
The mathematical techniques used in Lagrangian mechanics are also used in optimization. As already stated, many ideas in mathematics were initially developed to solve physics problems.
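A small illustration of the momentum update mentioned above, where the velocity term plays the role of inertia from mechanics; the quadratic objective and the hyperparameters are just a toy, not from any specific source:

```python
def sgd_momentum(grad, x0, lr=0.1, mu=0.9, steps=100):
    """Gradient descent with momentum: v accumulates past gradients like inertia."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = mu * v - lr * grad(x)   # velocity update
        x = x + v                   # position update
    return x

# Minimise f(x) = (x - 3)^2, whose gradient is 2 (x - 3).
print(sgd_momentum(lambda x: 2 * (x - 3), x0=0.0))  # converges near 3
```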
answered Jan 8 at 19:03
Raphael Augusto
The concept of potential is related to many different areas of physics. In CS, potential is used in the amortized analysis of data structures. We can look at how each step affects the potential of the system and therefore get an average (amortized) cost of an operation with a given data structure. This has given rise to many theoretically better data structures like the Fibonacci heap.
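To make the potential-method bookkeeping concrete on something simpler than a Fibonacci heap, a sketch for a doubling dynamic array with potential Φ = 2·size − capacity; the structure and constants are illustrative, not taken from any particular textbook implementation:

```python
def amortized_appends(n):
    """Potential-method bookkeeping for a doubling array with Phi = 2*size - capacity.

    Amortized cost of one append = actual cost (writes + copies) + change in Phi.
    It stays bounded by 3 even though a single append can cost O(size).
    """
    size, cap = 0, 1
    phi = 2 * size - cap
    worst_amortized = 0
    for _ in range(n):
        actual = 1 if size < cap else size + 1    # a resize copies every element
        if size == cap:
            cap *= 2
        size += 1
        new_phi = 2 * size - cap
        worst_amortized = max(worst_amortized, actual + new_phi - phi)
        phi = new_phi
    return worst_amortized

print(amortized_appends(100_000))  # prints 3: constant amortized cost per append
```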
Nate
To add to and fill in some gaps in the current excellent answers/coverage: there seems to be a strong connection between TCS and thermodynamics in various ways that hasn't yet been fully explored but is at the frontier of active research. There is a transition point associated with SAT, but it seems possible that there are transition points associated with other (or even all) complexity classes as well. The SAT transition point is associated with a difference between "easy" (P) and "hard" (NP) instances, but arguably all complexity class boundaries must lead to the same transition-point-like property.
Consider a Turing machine. It already measures its operation in the nominally physical dimensions of "time" and "space". But note that it apparently also does one unit of "work" in moving from square to square and making a transition. In physics the unit of work is the Joule, which is also a measure of energy. So it appears that complexity classes have some relation to energy levels, boundaries, or regimes.
Quantum mechanics theory increasingly sees space and time themselves, the universe, as a sort of computing system. It appears to have some "minimal computation units" intrinsic to its nature, probably related to the Planck length. So examination of minimal Turing machines for problems also implies and relates to minimal physical/energy systems or even volumes of space required. [3]
Also, the key concept of entropy shows up repeatedly in TCS and physics/thermodynamics and may be a unifying principle, with still more active research revealing its underlying nature. [1,2]
[1] entropy in information theory, wikipedia
[2] what is the CS defn of entropy, stackoverflow
[3] What is the Volume of Information? tcs.se
vzn
$\begingroup$ You realise that I answered the tcs.se question, right? $\endgroup$ – Joe Fitzsimons Jan 23 '13 at 15:03
$\begingroup$ I would like to understand why this question was downvoted. Downvoting without explanation helps no one, since the reasons can well be non technical. I understand that the OP was aware of some or all of this answer, but since he did not mention it in the question ... cc @JoeFitzsimons $\endgroup$ – babou Apr 27 '14 at 11:50
Technical advance
Introducing a new estimator and test for the weighted all-cause hazard ratio
Ann-Kathrin Ozga ORCID: orcid.org/0000-0002-9501-73011 &
Geraldine Rauch2,3
The rationale for the use of composite time-to-event endpoints is to increase the number of expected events and thereby the power by combining several event types of clinical interest. The all-cause hazard ratio is the standard effect measure for composite endpoints where the all-cause hazard function is given as the sum of the event-specific hazards. However, the effect of the individual components might differ, in magnitude or even in direction, which leads to interpretation difficulties. Moreover, the individual event types often are of different clinical relevance which further complicates interpretation. Our working group recently proposed a new weighted effect measure for composite endpoints called the 'weighted all-cause hazard ratio'. By imposing relevance weights for the components, the interpretation of the composite effect becomes more 'natural'. Although the weighted all-cause hazard ratio seems an elegant solution to overcome interpretation problems, the originally published approach has several shortcomings: First, the proposed point estimator requires pre-specification of a parametric survival model. Second, no closed formula for a corresponding test statistic was provided. Instead, a permutation test was proposed. Third, no clear guidance for the choice of the relevance weights was provided. In this work, we will overcome these problems.
Within this work, a new non-parametric estimator and a related closed-formula test statistic are presented. The performance of the new estimator and test is compared to that of the original ones in a Monte-Carlo simulation study.
The original parametric estimator is sensitive to misspecifications of the survival model. The new non-parametric estimator turns out to be very robust even if the required assumptions are not met. The new test shows considerably better power properties than the permutation test and is computationally much less expensive, but might not preserve the type I error in all situations. A scheme for choosing the relevance weights in the planning stage is provided.
We recommend using the non-parametric estimator along with the new test to assess the weighted all-cause hazard ratio. Concrete guidance for the choice of the relevance weights is now available. Thus, applying the weighted all-cause hazard ratio in clinical applications is both feasible and recommended.
In many clinical trials, the aim is to compare two treatment groups with respect to a rarely occurring event like myocardial infarction or death. In this situation, a high number of patients has to be included and observed over a long period of time to demonstrate a relevant treatment effect and to reach an acceptable power. Combining several events of interest within a so-called composite endpoint can lead to a smaller required sample size and save time, as a higher number of events is meant to increase the power. The common treatment effect measure for composite endpoints is the all-cause hazard ratio. This effect measure is based on the total number of events irrespective of their type. Commonly, either the log-rank test or the Cox proportional hazards model [1–4] is used for analysing the all-cause hazard ratio. However, the interpretation of the all-cause hazard ratio as a composite treatment effect can be difficult. This is due to two reasons: First, the composite might not necessarily reflect the effects of the individual components, which can differ in magnitude or even in direction [5–7]. Second, the distinct event types could be of different clinical relevance. For example, the fatal event 'death' is more relevant than a non-fatal event like 'cardiovascular hospital admission'. Moreover, the less relevant event often contributes a higher number of events and therefore has a higher influence on the composite effect than the more relevant event.
Current guidelines on clinical trial methodology hence recommend combining only events of the same clinical relevance [3, 8]. However, this is rather unrealistic in clinical practice, as important components like 'death' cannot be excluded from the primary analysis even if a fatal event is clearly more relevant than any other non-fatal event. Therefore, to address the problems that arise within the analysis of a composite endpoint, other methods to ease the interpretation of results are needed. An intuitive approach could be to define a weighted composite effect measure with weights that reflect the different levels of clinical relevance of the components. Weighted effect measures have been proposed and compared by several authors [9–14]. Some of the main disadvantages of these approaches include the high dependence on the censoring mechanism and on competing risks [13, 14]. Recently, Rauch et al. [15] proposed a new weighted effect measure called the 'weighted all-cause hazard ratio'. This new effect measure is defined as the ratio between the weighted averages of the cause-specific hazards for two groups. Thereby, the predefined weights are assigned to the individual cause-specific hazards. With equal weights for the components, the weighted all-cause hazard ratio corresponds to the common all-cause hazard ratio and thus defines a natural extension of the standard approach.
Although this new weighted effect measure seems an elegant solution to overcome interpretation problems, the originally published approach has several shortcomings: 1. The proposed original estimator for the weighted all-cause hazard ratio requires pre-specification of a parametric survival model to estimate the individual cause-specific hazards. The form of the survival model, however, is usually not known in the planning stage of a trial. 2. No closed formula for a corresponding test statistic was introduced but a permutation test was used instead which comes along with a high computational effort. 3. No clear guidance for the choice of the relevance weighting factors was provided. In this work, we want to address these issues to make the weighted all-cause hazard ratio more appealing for practical application. In particular we will provide answers to the following questions:
How robust is the original estimator for the weighted all-cause hazard ratio against miss-specifications of the underlying parametric survival model?
How robust is the new alternative non-parametric estimator for the weighted all-cause hazard ratio?
How can we derive a closed formula test statistic for testing the weighted all-cause hazard ratio?
How do the different estimators and tests behave in a direct performance comparison?
What are the required steps when choosing adequate weighting factors in the planning stage?
This paper is organized as follows: In the Methods Section, we start by introducing the standard unweighted approach for analysing a composite time-to-first event endpoint. In the same section, the weighted all-cause hazard ratio is introduced as well as the original parametric estimator and the permutation test as recently proposed by Rauch et al. [15]. A new non-parametric estimator for the weighted all-cause hazard ratio and a related closed formula test is introduced subsequently. Next, we provide a step-by-step guidance on the choice of the relevance weighting factors. In the Results Section, the different estimators and tests for the weighted all-cause hazard ratio are compared by means of a Monte-Carlo simulation study to evaluate their performance for various data scenarios, in particular those who meet and those who violate the underlying model assumptions. We discuss our methods and results and we finish the article with concluding remarks.
The standard all-cause hazard ratio
The interest lies, throughout this work, in a two-arm clinical trial where an intervention \(I\) shall be compared to a control \(C\) with respect to a composite time-to-event endpoint. A total of \(n\) individuals are randomized in a 1:1 allocation to the two groups. The composite endpoint consists of \(k\) components \(EP_{j},\ j=1,...,k\). It is assumed that a lower number of events corresponds to a more favourable result. The observational period is given by the interval \([0,\tau]\). The study aim is to demonstrate superiority of the new intervention and therefore a one-sided test problem is formulated.
Definitions and test problem
The all-cause hazard function for the composite endpoint is parametrized as
$$\begin{array}{*{20}l} &\lambda_{CE,i}(t)=\lambda_{CE,0}(t)\exp(\beta_{CE}X_{i}),\\ & i=1,...,n,\ \end{array} $$
where \(X_{i}\) is the treatment indicator, which equals 1 when individual \(i\) belongs to the intervention group and 0 when it belongs to the control group. Equivalently, the cause-specific hazards for the components are given as
$$\begin{array}{*{20}l} &\lambda_{EP_{j},i}(t)=\lambda_{EP_{j},0}(t)\exp(\beta_{EP_{j}}X_{i}),\\ & i=1,...,n,\ j=1,...,k. \end{array} $$
Note that the hazard for the composite endpoint is the sum of the cause-specific hazards for the components [15]
$$\begin{array}{@{}rcl@{}} \lambda_{CE}(t)=\sum_{j=1}^{k}{\lambda_{EP_{j}}(t)}. \end{array} $$
The all-cause hazard ratio for the composite is given as
$$\begin{array}{@{}rcl@{}} \theta_{CE}=\exp(\beta_{CE})=\frac{\lambda^{I}_{CE}(t)}{\lambda^{C}_{CE}(t)}, \end{array} $$
where the indices \(I\) and \(C\) denote the group allocation and proportional hazards are assumed so that \(\theta_{CE}\) is constant in time. Note that the proportional hazards assumption can only hold true for both the composite and for the components if equal cause-specific baseline hazards are assumed across all components.
As motivated above, a one-sided test problem for the all-cause hazard ratio is considered. The hypotheses thus read as
$$\begin{array}{@{}rcl@{}} H_{0}:\theta_{CE}\geq 1 \quad \textnormal{versus} \quad H_{1}: \theta_{CE} <1. \end{array} $$
Point estimator and test statistic
For estimating the all-cause hazard ratio, a semi-parametric estimator \(\widehat{\theta}_{CE}\) can be obtained by means of the partial maximum-likelihood estimator from the well-known Cox model [1].
The most common statistical test to assess the null hypothesis stated in (2) is the log-rank test. Let \(t_{l},\ l=1,...,d\), denote the distinct ordered event times for the pooled sample of both groups, where \(d\) is the total number of observed events, irrespective of type, within the observational period \([0,\tau]\). Moreover, let \(d_{EP_{j},l}=d_{EP_{j},l}^{I}+d_{EP_{j},l}^{C},\ l=1,...,d,\ j=1,...,k\), denote the observed number of individuals that experience an event of type \(j\) at time \(t_{l}\) in the pooled sample, given as the sum of the group-wise numbers of events. Similarly, let \(d_{l}=d_{l}^{I}+d_{l}^{C}=\sum\limits_{j=1}^{k}d_{EP_{j},l}^{I}+\sum\limits_{j=1}^{k}d_{EP_{j},l}^{C}\) denote the observed number of individuals that experience an event of any type at time \(t_{l}\) in the pooled sample, given as the sum of the group-wise numbers of events. The number of individuals at risk just before time \(t_{l}\) is denoted as \(n_{l}=n_{l}^{I}+n_{l}^{C}\). The Nelson-Aalen estimators for the cumulative all-cause hazard functions over the entire observational period are given as
$$\begin{array}{@{}rcl@{}} \hat\Lambda_{CE}^{I}(\tau)=\sum_{t_{l}\leq\tau}\frac{\sum\limits_{j=1}^{k}{d^{I}_{EP_{j},l}}}{n_{l}^{I}}=\sum_{t_{l}\leq\tau}\frac{{d^{I}_{l}}}{n_{l}^{I}} \end{array} $$
$$\begin{array}{@{}rcl@{}} \hat\Lambda_{CE}^{C}(\tau)=\sum_{t_{l}\leq \tau}\frac{\sum\limits_{j=1}^{k}{d^{C}_{EP_{j},l}}}{n_{l}^{C}}=\sum_{t_{l}\leq \tau}\frac{{d^{C}_{l}}}{n_{l}^{C}}. \end{array} $$
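For illustration, a minimal non-parametric sketch of these group-wise Nelson-Aalen estimators in Python; the function and the toy data are assumptions made for this sketch, not the authors' code:

```python
import numpy as np

def nelson_aalen(times, events):
    """Cumulative all-cause hazard: sum over distinct event times of d_l / n_l.

    `times`  : observed time for each subject (event or censoring)
    `events` : 1 if any component event occurred at that time, 0 if censored
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events)
    out = []
    cum = 0.0
    for t in np.unique(times[events == 1]):
        d_l = np.sum((times == t) & (events == 1))   # events at t
        n_l = np.sum(times >= t)                      # at risk just before t
        cum += d_l / n_l
        out.append((t, cum))
    return out

# Toy control-group data (times in months; 0 = censored).
print(nelson_aalen([2, 3, 3, 5, 8, 8, 9], [1, 1, 0, 1, 1, 1, 0]))
```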
Under the null hypothesis stated in (2), the cumulative all-cause hazards of both groups are equivalent. This means that the sums of the cause-specific hazards are assumed to be equivalent. This does not automatically imply that the cause-specific hazards themselves are also equivalent. However, this more specific assumption is required to deduce the test statistic of the weight-based log-rank test. Under the null hypothesis (2) and the additional assumption that the cause-specific hazards are equivalent, the random variable \(D^{I}_{l},\ l=1,...,d^{I}\), for randomly sampling \(d_{l}^{I}\) events from \(n_{l}^{I}\) patients, where \(n_{l}^{I}\) is a subset of the pooled sample with \(n_{l}^{I}+n_{l}^{C}\) individuals including a total of \(d_{l}\) events at a fixed time point \(t_{l}\), is hypergeometrically distributed as
$$\begin{array}{@{}rcl@{}} D^{I}_{l}\sim Hyp\left(n_{l}^{I}+n_{l}^{C},d_{l}, d_{l}^{I}\right). \end{array} $$
Then the expectation of the sum of the \(D^{I}_{l}\) over all distinct \(t_{l}\leq \tau\) is
$$\begin{array}{@{}rcl@{}} \mathbb{E}\left(\sum_{t_{l}\leq \tau}D^{I}_{l}\right)=\sum_{t_{l}\leq \tau}\mathbb{E}\left(D^{I}_{l}\right) =\sum_{t_{l}\leq \tau}\frac{n_{l}^{I}}{n_{l}^{I}+n_{l}^{C}}d_{l}. \end{array} $$
and the variance is given as
$$\begin{array}{*{20}l} &Var\left(\sum_{t_{l}\leq \tau}D^{I}_{l}\right) =\sum_{t_{l}\leq \tau}Var\left(D_{l}^{I}\right)\\ &=\sum_{t_{l}\leq \tau}{\frac{n_{l}^{I} n_{l}^{C} (n_{l}^{I}+n_{l}^{C}- d_{l}) d_{l}}{(n_{l}^{I}+n_{l}^{C})^{2}(n_{l}^{I}+n_{l}^{C}-1)}}. \end{array} $$
The corresponding log-rank test thus reads as [16]
$$\begin{array}{@{}rcl@{}} LR:= \frac{\sum\limits_{t_{l}\leq\tau}{\left(d_{l}^{I}-\frac{n_{l}^{I} d_{l}}{n_{l}^{I}+n_{l}^{C}}\right)}}{\sqrt{\sum\limits_{t_{l}\leq\tau} {\frac{n_{l}^{I} n_{l}^{C} (n_{l}^{I}+n_{l}^{C}- d_{l}) d_{l}}{(n_{l}^{I}+n_{l}^{C})^{2}(n_{l}^{I}+n_{l}^{C}-1)}}}}. \end{array} $$
The test statistic LR is approximately standard normally distributed under the null hypothesis given in (2). Negative values of the test statistic favour the intervention and therefore the null hypothesis is rejected if LR≤−z1−α, where z1−α is the corresponding (1−α)-quantile of the standard normal distribution and α is the one-sided significance level.
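As a sketch, the log-rank statistic can be computed directly from these grouped counts; the helper below is illustrative and uses the same assumed vectors as above.

```r
# One-sided log-rank statistic LR from grouped counts at the distinct event
# times (sketch; d_I, d_C are all-cause event counts and n_I, n_C risk-set
# sizes per group at each distinct event time)
logrank_stat <- function(d_I, d_C, n_I, n_C) {
  d <- d_I + d_C
  n <- n_I + n_C
  o_minus_e <- d_I - n_I * d / n                    # observed minus expected, group I
  v <- n_I * n_C * (n - d) * d / (n^2 * (n - 1))    # hypergeometric variances
  sum(o_minus_e) / sqrt(sum(v))
}
# reject H0 at one-sided level alpha if the statistic is <= -qnorm(1 - alpha)
```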
The weighted all-cause hazard ratio
The idea of the weighted all-cause hazard ratio is to replace the standard all-cause hazard given in (1) by a weighted sum of the cause-specific hazards using predefined relevance weights for the individual components that refer to their clinical relevance. The weighted all-cause hazard is then given as
$$\begin{array}{@{}rcl@{}} \lambda^{w}_{CE}(t):=\sum_{j=1}^{k}{w_{EP_{j}}\cdot \lambda_{EP_{j}}(t)} \end{array} $$
where the non-negative weights \(w_{EP_{j}}\geq 0\), j=1,...,k, reflect the clinical relevance of the components EPj, j=1,...,k. If the weights are all set to 1 \((w_{EP_{1}}=w_{EP_{2}}=...=w_{EP_{k}}=1)\), the weighted all-cause hazard corresponds to the standard all-cause hazard.
The 'weighted all-cause hazard ratio' as proposed by Rauch et al. [15] is then given as
$$\begin{array}{@{}rcl@{}} \theta^{w}_{CE}(t):=\frac{\lambda^{I,w}_{CE}(t)}{\lambda^{C,w}_{CE}(t)}, \end{array} $$
where the indices I and C denote the group allocation. Note that the weighted all-cause hazard ratio is a time-dependent effect measure except for the case of equal baseline hazards across the components [15] which refers to
$$\begin{array}{@{}rcl@{}} \lambda_{CE,0}(t)=\lambda_{EP_{1},0}(t)=...=\lambda_{EP_{k},0}(t). \end{array} $$
The weighted all-cause hazard ratio can also be integrated over the complete observational period [0,τ]
$$\begin{array}{@{}rcl@{}} \Theta^{w}_{CE}(\tau):=\frac{1}{\tau}\int_{0}^{\tau}{\theta^{w}_{CE}(t) {\mathrm{d}}t}. \end{array} $$
In the remainder of the work, we will concentrate on the weighted all-cause hazard ratio at a predefined time-point for the sake of simplicity. Again, a one-sided test problem for the weighted all-cause hazard ratio is considered
$$\begin{array}{@{}rcl@{}} H_{0}:\theta^{w}_{CE}\geq 1 \quad \textnormal{versus} \quad H_{1}: \theta^{w}_{CE} <1. \end{array} $$
The hypotheses to be assessed in the confirmatory analysis are thus equivalent to the common unweighted approach.
Original point estimator and test statistic
In order to estimate the weighted all-cause hazard ratio Rauch et al. [15] proposed to identify and estimate the cause-specific hazards via a parametric survival model. Rauch et al. [15] thereby focused on the Weibull model. This approach is thus based on the assumption that the cause-specific hazards for each component are proportional. With the estimated cause-specific hazards \(\hat \lambda ^{I}_{EP_{j}}\) and \(\hat \lambda ^{C}_{EP_{j}}\) derived from the Weibull model, a parametric estimator for the weighted all-cause hazard ratio is given by
$$\begin{array}{@{}rcl@{}} \hat\theta^{w}_{CE}(t)=\frac{\sum\limits_{j=1}^{k}{w_{EP_{j}}\cdot \hat\lambda^{I}_{EP_{j}}(t)}}{\sum\limits_{j=1}^{k}{w_{EP_{j}}\cdot \hat\lambda^{C}_{EP_{j}}(t)}}. \end{array} $$
The pre-specification of a survival model to identify the cause-specific hazards must be seen as a considerable restriction, as the shape of the survival distribution is usually not known in advance. Thus, it is of interest to evaluate how sensitive the parametric estimator is when the survival model is misspecified. Moreover, there is general interest in deriving a less restrictive non-parametric estimator.
A related variance estimator for (8) cannot easily be deduced and thus an asymptotic distribution of the parametric estimator given in (8) is not available. Therefore, Rauch et al. [15] considered a permutation test to assess the null hypothesis specified above. For the permutation test, the sampling distribution is built by resampling the observed data: the originally assigned treatment groups are randomly reassigned to the observations without replacement in several runs. Although this is an elegant option without the need for further restrictive assumptions, the disadvantage is that such a permutation test is not available as a standard application in statistical software but requires implementation. Moreover, depending on the trial sample size and the computer capacities, this is a very time-consuming approach.
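One possible way to implement the parametric approach is sketched below in R with the survival package: a Weibull model is fitted per component (other event types treated as censored), the survreg parametrization is converted to a hazard function, and the weighted ratio is formed at a chosen time point. The data layout (variables time, status, group in a data frame dat) and all parameter values are assumptions for illustration, not the authors' original implementation.

```r
library(survival)

# Weibull hazard implied by a survreg fit: for survreg, S(t) = exp(-(t/b)^a)
# with shape a = 1/fit$scale and scale b = exp(linear predictor)
weib_hazard <- function(fit, t, newdata) {
  a <- 1 / fit$scale
  b <- exp(predict(fit, newdata, type = "lp"))
  a * t^(a - 1) / b^a
}

# cause-specific Weibull fits (status == 1: EP1 event, status == 2: EP2 event)
fit1 <- survreg(Surv(time, status == 1) ~ group, data = dat, dist = "weibull")
fit2 <- survreg(Surv(time, status == 2) ~ group, data = dat, dist = "weibull")

w <- c(1, 0.6); t0 <- 1                      # illustrative weights and time point
num <- w[1] * weib_hazard(fit1, t0, data.frame(group = 1)) +
       w[2] * weib_hazard(fit2, t0, data.frame(group = 1))
den <- w[1] * weib_hazard(fit1, t0, data.frame(group = 0)) +
       w[2] * weib_hazard(fit2, t0, data.frame(group = 0))
theta_w_hat <- num / den                     # parametric weighted all-cause hazard ratio
```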
New point estimator and closed formula test statistic
To derive the new point estimator, we will assume in the following that the baseline hazards for all individual components and for the composite are equivalent within each group, meaning that
$$\begin{array}{@{}rcl@{}} \lambda_{CE,i}(t)&=&\lambda_{0}(t)exp(\beta_{CE}X_{i})\\ &=&\lambda_{0}(t)\sum_{j=1}^{k} exp(\beta_{EP_{j}}X_{i}) \end{array} $$
i=1,...,n, and thus (4) reads as
$$\begin{array}{@{}rcl@{}} \lambda^{w}_{CE,i}(t)=\lambda_{0}(t)\sum_{j=1}^{k} w_{EP_{j}}exp(\beta_{EP_{j}}X_{i}). \end{array} $$
This is a very restrictive assumption that is usually not met in practice. The assumption is only required to formally derive the new non-parametric estimator. We do not generally focus on data situations where this assumption is fulfilled. The estimator is only relevant for practical use if deviations from this assumption produce no relevant bias. This will be investigated in detail in the sections Simulation scenarios and Results.
Under this assumption, the baseline hazards in the representation of the weighted all-cause hazard ratio cancel out. By this, the weighted all-cause hazard ratio is no longer time-dependent. It is therefore possible to replace the cause-specific hazards by the cumulative cause-specific hazards:
$$\begin{array}{@{}rcl@{}} \theta^{w}_{CE}=\theta^{w}_{CE}(t)&=&\frac{\sum\limits_{j=1}^{k} w_{EP_{j}}\lambda^{I}_{EP_{j}}(t)}{\sum\limits_{j=1}^{k} w_{EP_{j}}\lambda^{C}_{EP_{j}}(t)}\notag\\ &=&\frac{\sum\limits_{j=1}^{k} w_{EP_{j}}\lambda_{0}(t)exp(\beta_{EP_{j}}\cdot 1)}{\sum\limits_{j=1}^{k} w_{EP_{j}}\lambda_{0}(t)exp(\beta_{EP_{j}}\cdot 0)}\notag\\ &=&\frac{\sum\limits_{j=1}^{k} w_{EP_{j}}\int_{0}^{t}\lambda_{0}(s)exp(\beta_{EP_{j}})\mathrm{d}s}{\sum\limits_{j=1}^{k} w_{EP_{j}}\int_{0}^{t}\lambda_{0}(s)exp(0)\mathrm{d}s} \notag\\ &=&\frac{\sum\limits_{j=1}^{k} w_{EP_{j}}\Lambda^{I}_{EP_{j}}(t)}{\sum\limits_{j=1}^{k} w_{EP_{j}}\Lambda^{C}_{EP_{j}}(t)}, \end{array} $$
where \(\Lambda _{EP_{j}}(t),\ j=1,...,k\), refer to the corresponding cause-specific cumulative hazards over the period [0,t], and Xi equals 1 if individual i belongs to the intervention group and 0 otherwise. This representation can be used to derive a non-parametric estimator for the weighted all-cause hazard ratio using the corresponding non-parametric Nelson-Aalen estimators given as
$$\begin{array}{@{}rcl@{}} \hat\Lambda^{I}_{EP_{j}}(t):=\sum_{t_{l}\leq t}\frac{d_{EP_{j},l}^{I}}{n_{l}^{I}},\quad \hat\Lambda^{C}_{EP_{j}}(t):=\sum_{t_{l}\leq t}\frac{d_{EP_{j},l}^{C}}{n_{l}^{C}}, \end{array} $$
using the notations given in section Point estimator and test statistic. By this a non-parametric estimator for the weighted all-cause hazard ratio is given by
$$\begin{array}{@{}rcl@{}} \widetilde\theta^{w}_{CE}(t):= \frac{\sum\limits_{j=1}^{k} w_{EP_{j}}\cdot\hat\Lambda^{I}_{EP_{j}}(t)}{\sum\limits_{j=1}^{k}w_{EP_{j}}\cdot\hat\Lambda^{C}_{EP_{j}}(t)}. \end{array} $$
In contrast to the parametric estimator \(\hat \theta ^{w}_{CE}(t)\) given in (8), the non-parametric estimator \(\widetilde {\theta }^{w}_{CE}(t)\) given in (10) does not require the pre-specification of a survival model. However, the correctness of the non-parametric estimator is still based on the assumption of equal cause-specific baseline hazards. In case the baseline hazards differ, \(\widetilde {\theta }^{w}_{CE}(t)\) can be calculated but represents a biased estimator for \(\theta ^{w}_{CE}(t)\). Therefore, it is of interest to evaluate how sensitive the non-parametric estimator is when the equal baseline hazards assumption is violated.
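A minimal R sketch of the non-parametric estimator is given below. It assumes matrices d_I and d_C of cause-specific event counts (one column per component, one row per distinct event time up to t) and vectors n_I, n_C of risk-set sizes; these names are illustrative.

```r
# Non-parametric estimator of the weighted all-cause hazard ratio at time t
# (sketch; d_I, d_C: cause-specific event counts per group, one column per
#  component EP_j and one row per distinct event time <= t;
#  n_I, n_C: risk-set sizes; w: relevance weights)
theta_w_tilde <- function(d_I, d_C, n_I, n_C, w) {
  Lambda_I <- colSums(d_I / n_I)   # cause-specific Nelson-Aalen estimates, intervention
  Lambda_C <- colSums(d_C / n_C)   # cause-specific Nelson-Aalen estimates, control
  sum(w * Lambda_I) / sum(w * Lambda_C)
}
```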
An alternative testing procedure to the discussed permutation test can be formulated by a weight-based log-rank test statistic derived from a modification of the common log-rank test statistic given in (3). We use the expression 'weight-based log-rank test' instead of 'weighted log-rank test', as in the literature the weighted log-rank test refers to weights which are assigned to the different observation time points whereas we aim to weight the different event types of a composite endpoint.
Under the null hypothesis (7) and under the assumption that the weighted all-cause hazards are equal between groups, the random variable \(D_{EP_{j},l}^{I},\ j=1,...,k\), for randomly sampling \(d_{EP_{j},l}^{I}\) events of type EPj from \(n_{l}^{I}\) patients where \(n_{l}^{I}\) is a subset of the pooled sample with \(n_{l}^{I}+n_{l}^{C}\) individuals including a total of dl events at a fixed time point tl is hypergeometrically distributed as
$$\begin{array}{@{}rcl@{}} D^{I}_{EP_{j},l}\sim Hyp\left(n_{l}^{I}+n_{l}^{C},d_{EP_{j},l}, d_{EP_{j},l}^{I}\right). \end{array} $$
The expectation of the weighted sum of the \(D^{I}_{EP_{j},l}\) over all distinct tl≤τ is given as
$$\begin{array}{*{20}l} &\mathbb{E}\left(\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}\cdot D_{EP_{j},l}^{I}\right)\\&=\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}\mathbb{E}\left(D_{EP_{j},l}^{I}\right)\\ &=\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}\frac{n_{l}^{I}}{n_{l}^{I}+n_{l}^{C}}d_{EP_{j},l}\\&=\sum_{t_{l}\leq \tau}\frac{n_{l}^{I}}{n_{l}^{I}+n_{l}^{C}}\sum_{j=1}^{k}w_{EP_{j}}d_{EP_{j},l} \end{array} $$
and the variance as
$$\begin{array}{*{20}l} &Var\left(\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}\cdot D_{EP_{j},l}^{I}\right)\\ &=\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}^{2} Var\left(D_{EP_{j},l}^{I}\right)\\ &=\sum_{t_{l}\leq \tau}\sum_{j=1}^{k}w_{EP_{j}}^{2}{\frac{n_{l}^{I} n_{l}^{C} (n_{l}^{I}+n_{l}^{C}- d_{EP_{j},l}) d_{EP_{j},l}}{(n_{l}^{I}+n_{l}^{C})^{2}(n_{l}^{I}+n_{l}^{C}-1)}}\\ &=\sum_{t_{l}\leq \tau}\frac{n_{l}^{I} n_{l}^{C} }{(n_{l}^{I}+n_{l}^{C})^{2}(n_{l}^{I}+n_{l}^{C}-1)} \cdot \\ &\left((n_{l}^{I}+n_{l}^{C})\sum_{j=1}^{k}w_{EP_{j}}^{2}\cdot d_{EP_{j},l} - \sum_{j=1}^{k}w_{EP_{j}}^{2}\cdot d^{2}_{EP_{j},l}\right), \end{array} $$
assuming that no events of different types occur at the same time point.
Thus, the weight-based log-rank test for the proposed weighted effect measure can be defined analogously to (3) as
$$\begin{array}{@{}rcl@{}} LR^{w}:=&\\ &\frac{\sum\limits_{t_{l}\leq \tau}{\left(\sum\limits_{j=1}^{k}{w_{EP_{j}} d_{EP_{j},l}^{I}}-\frac{n_{l}^{I}}{n_{l}^{I}+n_{l}^{C}}\sum\limits_{j=1}^{k}{w_{EP_{j}}d_{EP_{j},l}}\right)}} {\sqrt{\sum\limits_{t_{l}\leq \tau}\left(\frac{{n_{l}^{I} n_{l}^{C}\left(\left(n_{l}^{I}+n_{l}^{C}\right)\sum\limits_{j=1}^{k}{w_{EP_{j}}^{2}d_{EP_{j},l}}-\sum\limits_{j=1}^{k}{w_{EP_{j}}^{2} d^{2}_{EP_{j},l}}\right)}}{\left(n_{l}^{I}+n_{l}^{C}\right)^{2}\left(n_{l}^{I}+n_{l}^{C}-1\right)}\right)}}. \end{array} $$
Under the null hypothesis of equal weighted composite (cumulative) hazards the test statistic (11) is approximately standard normal distributed. Hence, the null hypothesis is rejected if LRw≤−z1−α, where z1−α is the corresponding (1−α)-quantile of the standard normal distribution and α is the one-sided significance level.
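The closed-form statistic lends itself to a direct implementation. The R sketch below assumes matrices d_I (group I) and d (pooled) of cause-specific event counts with one column per component and one row per distinct event time, plus risk-set vectors n_I, n_C; all names are illustrative.

```r
# Weight-based log-rank statistic LR^w (sketch)
weight_based_logrank <- function(d_I, d, n_I, n_C, w) {
  n <- n_I + n_C
  obs   <- as.vector(d_I %*% w)               # weighted events observed in group I
  expct <- n_I / n * as.vector(d %*% w)       # expected weighted events in group I
  v <- n_I * n_C / (n^2 * (n - 1)) *
       (n * as.vector(d %*% w^2) - as.vector(d^2 %*% w^2))
  sum(obs - expct) / sqrt(sum(v))
}
# reject H0 if weight_based_logrank(...) <= -qnorm(1 - alpha)
```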
Note that the common weighted log-rank test can be shown to be equivalent to the Cox score test [16] because its weights act on the coefficient β, so that the partial likelihood and its logarithm can be easily deduced. The intention of the common weighted log-rank test is to weight the time points. In our weight-based log-rank test, however, the weights have another meaning and act on the whole hazard, not only on the coefficient. Thus, the log-likelihood translates to a form where the weights are additive and therefore the score test does not translate to the test statistic proposed in this work. This was also the reason why we called our test 'weight-based' and not 'weighted' log-rank test. Our test is valid but must be interpreted as a Wald-type test statistic.
Step-by-step guidance for the choice of weights
When using the weighted all-cause hazard ratio as the efficacy effect measure for a composite endpoint, it is important to fix the weights in the planning stage of the study. This can be seen as a quite challenging task, as the choice of the weights importantly influences the final outcome and the interpretation of the results. Thus, it is important to choose the weights in a well-reflected way and not arbitrarily. To help researchers with this task, we provide detailed steps on how to choose appropriate weights for a specific clinical trial situation. When discussing the choice of weights, it must be kept in mind that the standard all-cause hazard, i.e. the unweighted scenario, corresponds to equal weights for all components, implying that event types with a higher event frequency are naturally up-weighted. Therefore, equal weights for all components can be considered at least as arbitrary as predefined weights according to relevance considerations. To define reasonable weights, we first recall the weighted all-cause hazard function as introduced in (4)
$$\begin{array}{@{}rcl@{}} \lambda^{w}_{CE}(t)=\sum_{j=1}^{k}w_{EP_{j}}\cdot \lambda_{EP_{j}}(t). \end{array} $$
The weighted all-cause hazard can also be interpreted as the standard all-cause hazard based on modified cause-specific hazards \(\tilde {\lambda }_{EP_{j}}(t)\) where
$$\begin{array}{@{}rcl@{}} \tilde{\lambda}_{EP_{j}}(t):=w_{EP_{j}}\cdot \lambda_{EP_{j}}(t), \ j=1,..,k. \end{array} $$
Thus, by introducing the component weights we implicitly modify the event time distribution, that is, the corresponding survival function. When choosing a weight unequal to 1, the survival distribution changes its shape. For a weight larger than 1, the number of events artificially increases and, as a consequence, the survival function decreases sooner. In contrast, for a weight smaller than 1 the survival distribution becomes flatter as the number of events is artificially decreased. Whereas the all-cause hazard ratio can be heavily masked by a large cause-specific hazard of a less relevant component, a more relevant component with a lower number of events can only have a meaningful influence on the composite effect measure when it is up-weighted (or if the less relevant component is down-weighted accordingly). On the contrary, if a large cause-specific hazard is down-weighted, this can result in a power loss. Therefore, weighting can improve interpretation, but the effect on power can be positive or negative, depending on the data situation at hand.
In order to preserve comparability to the unweighted all-cause hazard ratio, we recommend fixing the weight of the most important component, which is often given by 'death', to 1. All other weights should then be chosen smaller than or equal to 1. When considering a weight larger than 1 for the most relevant component, this results in endless possibilities and it becomes more difficult to set the weights for the other, less relevant events in an adequate relation. The general recommendation of fixing all weights \(w_{EP_{j}}\leq 1,\ j=1,...,k\) is moreover reasonable because choosing a set of weights which are both smaller and greater than 1 can cause a situation where the weighted all-cause hazard is equivalent to the standard all-cause hazard. This is problematic because in this case we cannot differentiate whether the effect is due to the weighting scheme or due to the underlying cause-specific hazards. For illustration of the latter problem, consider two event types EP1 and EP2 with exponentially distributed event times, where EP1 corresponds to the more relevant endpoint
$$\begin{array}{@{}rcl@{}} \lambda_{EP_{1}}(t)=0.2 \qquad \lambda_{EP_{2}}(t)=0.3. \end{array} $$
This leads to the standard all-cause hazard
$$\begin{array}{@{}rcl@{}} \lambda_{CE}(t)=0.2+0.3=0.5. \end{array} $$
If the weights are chosen as \(w_{EP_{1}}=1.3\) and \(w_{EP_{2}}=0.8\) the weighted cause-specific hazards are given as
$$\begin{array}{*{20}l} &\tilde\lambda_{EP_{1}}(t)=1.3\cdot 0.2=0.26 \\ &\tilde\lambda_{EP_{2}}(t)=0.8\cdot 0.3=0.24 \end{array} $$
and therefore, the weighted all-cause hazard is equivalently given by
$$\begin{array}{@{}rcl@{}} \lambda^{w}_{CE}(t)=0.26+0.24=0.5. \end{array} $$
Choosing the weights \(w_{EP_{1}}=1\) and \(w_{EP_{2}}=0.6\) gives the weighted cause-specific hazards
$$\begin{array}{*{20}l} &\tilde\lambda_{EP_{1}}(t)=1\cdot 0.2=0.2 \\ &\tilde\lambda_{EP_{2}}(t)=0.6\cdot 0.3=0.18, \end{array} $$
and therefore
$$\begin{array}{@{}rcl@{}} \lambda_{CE}^{w}(t)=0.2+0.18=0.38, \end{array} $$
where the influence of the weights is now visible. Instead of interpreting the weighted hazards, for the applied researcher it might be easier to consider the corresponding weighted composite survival function \(S_{CE}^{w}(t)\) given as
$$\begin{array}{@{}rcl@{}} S_{CE}^{w}(t)&=&exp(-\Lambda_{CE}^{w}(t))=exp\left(-\int_{0}^{t}\lambda_{CE}^{w}(x)dx\right)\\ &=&exp\left(-\int_{0}^{t} \left(\sum_{j=1}^{k}w_{EP_{j}}\cdot \lambda_{EP_{j}}(x)\right) dx\right)\\&=&exp\left(- \sum_{j=1}^{k} w_{EP_{j}}\int_{0}^{t} \left(\lambda_{EP_{j}}(x)\right) dx\right)\\ &=&exp\left(- \sum_{j=1}^{k} w_{EP_{j}}\Lambda_{EP_{j}}(t)\right)\\ &=&\prod_{j=1}^{k} exp(-w_{EP_{j}}\Lambda_{EP_{j}}(t)). \end{array} $$
It can be seen that the weights still act multiplicatively on the cumulative cause-specific hazards and the event time distributions for the different event types are also connected multiplicatively. By the introduction of the weights we still assume that an individual can only experience one event, but (for weights smaller than 1) fewer individuals experience the event. This means that the expected number of events decreases with a weight smaller than 1. Therefore, the weighted survival function for the composite still corresponds to a time-to-first-event setting but with a proportion of events which is lower compared to the unweighted approach.
Comparing the graphs of the weighted and unweighted event time distributions can be a helpful tool for choosing the weights, as shown in Fig. 1 for the exemplary setting discussed above. It can be seen that both weighting schemes yield a larger difference between the event time distributions of intervention and control than the unweighted approach; the second weighting scheme shows the larger difference.
Event time distributions for two different weighting schemes: Scenario A: \(w_{EP_{1}}=1, w_{EP_{2}}=0.6\); Scenario B: \(w_{EP_{1}}=1, w_{EP_{2}}=0.2\)
In conclusion, we recommend proceeding as follows in order to choose the weights:
Identify the clinically most relevant event type (e.g. 'death') and assign a weight of 1.
Choose the order of clinical relevance for the remaining event types. For each event type EPj you should answer the question "How many events of type EPj can be considered as equally harmful as observing one event (or any other amount of reference events) in the clinically most relevant endpoint?". For example, if in the example given above 5 events of type EP2 are considered as equally harmful as one event of EP1, then the weighting scheme proposed in Scenario B might be preferred. If instead the researcher argues that 5 events of type EP2 are considered as equally harmful as 3 events of EP1, then the weighting scheme proposed in Scenario A should be preferred. The weights are thus meant to bring all events to the same severity scale. By assigning a weight of 1 to the most relevant event type, this event type acts as the reference event. Therefore, the weighted survival function and its summarizing measures (median survival, hazard ratio) can be interpreted as a standard survival function for the reference event. For example, if 'death' is the reference event, on a population and on an individual patient level, the weighted survival function then expresses the probability to be neither dead nor in a condition considered as equally harmful. The median weighted survival can be interpreted as the time when half of the population is either dead or in an equally harmful condition.
If there are assumptions about the form of the underlying event time distributions, then the functional form of the cause-specific hazards is known. The weighted cause-specific hazards are obtained by simple multiplication with the weighting factors. We recommend choosing several weighting scenarios, plotting the resulting weighted and unweighted event time distributions, and investigating graphically how different weights would affect the expected survival time and median survival per group; a plotting sketch is given below. Moreover, the weighted and unweighted hazard ratios can be analytically deduced and compared. By this, the impact of the weighting scheme becomes more explicit.
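The following R sketch illustrates such a graphical comparison for the exponential example above (constant cause-specific hazards 0.2 and 0.3); the weighting schemes match Scenarios A and B of Fig. 1, and all numbers are for illustration only.

```r
# Weighted composite survival S^w(t) = exp(-sum_j w_j * Lambda_j(t)); for
# constant cause-specific hazards this reduces to exp(-sum(w * lam) * t)
S_w <- function(t, lam, w) exp(-sum(w * lam) * t)

t   <- seq(0, 10, by = 0.05)
lam <- c(EP1 = 0.2, EP2 = 0.3)

plot(t, S_w(t, lam, c(1, 1)), type = "l", lty = 1, ylim = c(0, 1),
     xlab = "time", ylab = "composite survival")      # unweighted
lines(t, S_w(t, lam, c(1, 0.6)), lty = 2)             # Scenario A: w = (1, 0.6)
lines(t, S_w(t, lam, c(1, 0.2)), lty = 3)             # Scenario B: w = (1, 0.2)
legend("topright", lty = 1:3,
       legend = c("unweighted", "w = (1, 0.6)", "w = (1, 0.2)"))
```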
Simulation scenarios
To provide a systematic comparison of the original parametric estimator \(\widehat \theta ^{w}_{CE}(t)\) to the new non-parametric estimator \(\widetilde \theta ^{w}_{CE}(t)\) for the weighted all-cause hazard ratio and in order to analyse the performance of the weight-based log-rank test compared to the originally proposed permutation test we performed a simulation study with the software R Version 3.3.3 [17].
Within our simulation study, we investigate various data scenarios for a composite endpoint composed of two components EP1 and EP2. We restrict our simulations to weights given by \(w_{EP_{1}}=1,\ w_{EP_{2}}=0.1\) or \(w_{EP_{1}}=0.1,\ w_{EP_{2}}=1\), where the two event types are thus considered to show a considerable difference in clinical relevance. The results for another, less extreme weighting scheme (weights 1 and 0.7) are provided as Additional file 1. A total of 10 scenarios based on different underlying hazard functions were considered in order to mimic situations where the underlying assumptions of both approaches are fulfilled and those where they are (partly) violated. For the original parametric estimator, the cause-specific hazards were estimated by fitting Weibull models. A total of 1000 data sets each with n=200 patients (100 patients per group) were simulated for each scenario. The number of simulated data sets was limited to 1000 because of the long runtime of the permutation test, which was based on 1000 runs. We used the pseudo-random generator Mersenne Twister [18]. For simulating the underlying event times, the approach described by Bender et al. [19] was used. The minimal follow-up was fixed to either τ=1 or τ=2 year(s). For each scenario the methods were compared on the same data sets. In case of non-convergence of a model, the data set was excluded. Table 1 lists the underlying hazard functions for the different simulation scenarios and summarizes briefly which assumptions are met. In Fig. 2 the corresponding weighted and unweighted event time distributions for the composite for the intervention and the control group are graphically displayed for all 10 scenarios. In addition, the related weighted and unweighted hazard ratios for the composite are visualized along with the unweighted cause-specific effects.
Event time distributions for the intervention (dashed lines) and control (solid lines) for the composite endpoint based on the unweighted (black lines) and weighted (yellow and blue lines) cause-specific hazards as well as the unweighted all-cause hazard ratio (black solid line) in comparison to the weighted all-cause hazard ratios (yellow and blue lines) and the cause-specific hazard ratios (dotted black lines)
Table 1 Investigated simulation scenarios
For Scenarios 1-7 the cause-specific hazards are Weibull or exponentially distributed with a hazard of the form [20]
$$ \lambda(t)=\kappa\cdot \nu\cdot t^{\nu-1}. $$
Thereby, κ>0 is the scale parameter and ν>0 is the shape parameter. The investigated scenarios show to some extent the flexibility of the Weibull model. Situations with earlier occurring events for one event type (higher cause-specific hazard) and later occurring events for the other event type (lower cause-specific hazard) are captured, as well as situations where the difference in hazards is smaller. In Scenarios 1-6 at least one cause-specific hazard is time-dependent, whereas in Scenario 7 the hazards are constant.
The hazard for the composite increases over time for Scenarios 1 and 3 and decreases for Scenario 2. For Scenarios 4 and 5 the hazard first decreases and then increases after a while. For Scenarios 1 and 3 the proportional hazards assumption is fulfilled for each of the event types simultaneously. Also note that in Scenario 3, and partly in Scenarios 4 and 5, the effects for the event types point in opposite directions. Scenario 6 depicts a situation where no treatment effect for the individual components and the composite exists. In Scenario 7 there are opposite effects for the individual components which cancel out in the combined composite for one weighting scheme.
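For reference, a minimal R sketch of how such composite event times can be generated with the inversion approach of Bender et al. [19] is given below; the two latent cause-specific times use Weibull cumulative hazards κ t^ν, and the parameter values are illustrative rather than those of Table 1.

```r
# Inversion method: for cumulative hazard H(t) = kappa * t^nu,
# T = (-log(U) / kappa)^(1 / nu) with U ~ Uniform(0, 1)
r_weibull_ch <- function(n, kappa, nu) (-log(runif(n)) / kappa)^(1 / nu)

simulate_composite <- function(n, kappa1, nu1, kappa2, nu2, tau) {
  t1 <- r_weibull_ch(n, kappa1, nu1)          # latent time for EP1
  t2 <- r_weibull_ch(n, kappa2, nu2)          # latent time for EP2
  time   <- pmin(t1, t2, tau)                 # time to first event, censored at tau
  status <- ifelse(time == tau, 0, ifelse(t1 <= t2, 1, 2))
  data.frame(time = time, status = status)
}

set.seed(1)
dat_ctrl <- simulate_composite(100, kappa1 = 0.2, nu1 = 1.5,
                               kappa2 = 0.3, nu2 = 1.0, tau = 2)
```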
As we aim to quantify how robust the original parametric estimator for the weighted all-cause hazard ratio based on the Weibull model is when the event times for the components in fact do not follow a Weibull distribution, Scenarios 8 to 10 are based on a Gompertz distribution. Like the Weibull model, the Gompertz model fulfils the proportional hazards assumption; the hazard is parametrized as
$$ \lambda(t)=\kappa\cdot e^{\nu\cdot t}+\epsilon, $$
which is also referred to as the Gompertz-Makeham hazard [20, 21]. Again, κ>0 is a scaling parameter and ν>0 a shape parameter. In addition, a more general term ε≥−κ defining the intercept is included. For all scenarios with Gompertz distributed event times the hazard for the composite increases over time. For the situation where the shape parameters are equal across all event types, the proportional hazards assumption also applies to the composite. This is the case for Scenario 8 but not for Scenarios 9 and 10. The proportional hazards assumption also holds true for each event type separately for Scenarios 8 and 9. In Scenario 10 the proportional hazards assumption is violated for all event types and for the composite.
Table 2 displays the results of the simulation study for all Scenarios 1 to 10. Columns 2 and 3 present check boxes for the underlying model assumptions of the two estimators. Especially for those scenarios where some assumptions are violated, we are interested in the (standardized) bias, the relative efficiency, and the coverage of the corresponding confidence interval for the different estimators (see Table 3). Thereby, the bias is quantified by comparing the mean logarithmized (natural) estimators (Table 2, Columns 10 and 11) to the corresponding natural logarithm of the true effect (Table 2, Column 7), which is fixed by the simulation setting. For the standardized bias, the bias is divided by the corresponding standard error of the estimated effects. The relative efficiency is the quotient of the mean square error of the original estimator divided by the mean square error of the non-parametric estimator; a relative efficiency smaller than one is in favour of the original estimator. The mean square error is the sum of the squared bias and the squared standard error of the logarithmized estimators. The coverage is the proportion of times the 95% confidence interval for each estimated effect includes the true effect. To determine the confidence intervals, the standard error for all estimated effects is required, which we obtained from the permutation distribution. Thereby, again the logarithmic scale is used so that the estimators' distribution is not skewed and thus their standard deviation and the performance measures in Table 3 can be interpreted. In Column 4 of Table 2 the time point τ at which the estimators are evaluated is shown, and Columns 5 and 6 show the underlying component weights. Note that by switching the weights between the two components, we implicitly investigate the influence of all hazard combinations when the relevance of the components is reversed. Column 7 displays the logarithmized true weighted all-cause hazard ratio at time τ, which can be obtained from the underlying cause-specific hazard functions. Columns 8 and 9 show the mean number of events averaged over all data sets per scenario for all event types separately and its standard deviation. Columns 10 and 11 show the mean of the logarithmized estimated weighted all-cause hazard ratio and its standard deviation based on the original parametric estimator \(\widehat {\theta }^{w}_{CE}(\tau)\) and based on the new non-parametric estimator \(\tilde {\theta }^{w}_{CE}(\tau)\). Columns 12 and 13 show the empirical power values for the originally proposed permutation test based on \(\widehat {\theta }^{w}_{CE}(\tau)\) and for the new weight-based log-rank test based on \(\tilde {\theta }^{w}_{CE}(\tau)\). Note that the reported power values correspond to one-sided tests based on a one-sided significance level of 0.025. Table 3 depicts the number of simulations that converged (Columns 2 and 3), the bias (Columns 4 and 5), the standardized bias (Columns 6 and 7), the square root of the mean square error (Columns 8 and 9), the relative efficiency (Column 10), and the coverage (Columns 11 and 12) for the logarithmized original and new estimators.
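As a small illustration of how these performance measures can be computed, the R sketch below assumes vectors est_par and est_np of logarithmized estimates across simulation runs and the true value log_true; the names and the simplified setup are assumptions, not the authors' original code.

```r
# Bias, standardized bias and root mean square error on the log scale
performance <- function(est, log_true) {
  bias <- mean(est) - log_true
  se   <- sd(est)
  c(bias = bias, std_bias = bias / se, rmse = sqrt(bias^2 + se^2))
}

perf_par <- performance(est_par, log_true)   # original parametric estimator
perf_np  <- performance(est_np,  log_true)   # new non-parametric estimator

# relative efficiency: MSE(parametric) / MSE(non-parametric); < 1 favours
# the parametric estimator
rel_eff <- perf_par["rmse"]^2 / perf_np["rmse"]^2
```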
Table 2 Simulation results
Table 3 Simulation results: Performance
Scenarios 1 and 3 reflect situations where the proportional hazards assumption is fulfilled for each component but the Weibull distributed cause-specific hazards are unequal and thus the composite effect is time-dependent. Since in these scenarios the assumptions for the original estimator are fulfilled, it is intuitive that the (standardized) bias is small for the parametric estimator. Although the assumptions for the non-parametric estimator are violated, the bias is still rather small. This good performance is also captured in the coverage, which is mostly near the anticipated 95%. It is furthermore intuitive that the original estimator most often shows a smaller mean square error in relation to the non-parametric estimator. Note that in Scenario 3 the unweighted effects point in different directions but the direction of the weighted effect depends on the weighting scheme. In Scenarios 2, 4, and 5 the proportional hazards assumption is fulfilled neither for the components nor for the composite, but the cause-specific hazards still follow a Weibull distribution. For Scenario 2 it can be seen that the estimated weighted effects are the same for both estimators but do not approach the true effect as well as in Scenarios 1 and 3. This is because both approaches need at least the assumption of proportional hazards in the components. A similar outcome would be expected for Scenarios 4 and 5. However, in both scenarios the parametric estimator performs much worse than the non-parametric estimator. This is due to the higher variability in the estimations. For Scenario 6, where there is no effect for the unweighted composite, both approaches perform quite well. For the original estimator this was expected since its assumptions are fulfilled. In Scenario 7 with the weights 1 for event type 1 and 0.1 for event type 2, the true combined treatment effect is 0. This is also captured quite well by both estimators. Note that only for this specific weighting scheme the composite effect is 0 but not for the other weighting schemes. However, the performance of the estimation approaches is also satisfying for the other weighting schemes. In Scenario 8, Gompertz-Makeham distributed cause-specific hazards are assumed. Thereby, the proportional hazards assumption is fulfilled for the components and the composite. Thus, it is intuitive that the new non-parametric estimator closely coincides with the true effect. However, the parametric estimator based on the Weibull model is relevantly biased independently of the weighting scheme and shows a higher variability. Scenario 9 still depicts Gompertz-Makeham distributed cause-specific hazards but the proportional hazards assumption is only fulfilled for the components and not for the composite. Although the cause-specific baseline hazards are thus unequal, the non-parametric estimator performs better in this scenario, whereas the parametric estimator shows substantial bias and variability, which might also be due to convergence problems. Scenario 10 represents Gompertz distributed cause-specific hazards where the proportional hazards assumption is fulfilled neither for the components nor for the composite. Compared to the two previous scenarios the performance of the parametric estimator has increased and is not globally worse than that of the non-parametric estimator; the performance depends on the weighting scheme. Here, not all τ-weight combinations are displayed.
However, the performance of the missing combination scenarios is comparable to the corresponding scenarios displayed.
In conclusion, the original parametric estimator turns out to be sensitive to model misspecification when estimating the underlying cause-specific hazards, as expressed by most values of the (standardized) bias and the coverage of the confidence intervals for Scenarios 4, 5, 8, 9, 10. In these scenarios, the performance of the non-parametric estimator tends to be better because not only is the (standardized) bias smaller and the coverage probability better, but the relative efficiency also favours the non-parametric approach. Moreover, in Scenarios 4 and 5 the (standardized) bias of the parametric estimator is smaller but its variation is considerably higher, which cannot be explained by the smaller number of converged simulations alone. The higher number of non-converging models for the original approach is a further disadvantage. In scenarios where the assumptions for the parametric estimator are fulfilled (Scenarios 1 and 3), its performance tends to be better than that of the non-parametric approach. Although in these scenarios the assumption of equal cause-specific baseline hazards is violated, the performance of the non-parametric estimator is nevertheless not considerably worse than that of the parametric estimator.
Except for Scenario 1b, the power of the weight-based log-rank test is uniformly equal to or larger than the power of the permutation test. This power advantage occurs in particular in situations where the two point estimators coincide (Scenarios 2a and 2b or 10a) or even when the non-parametric estimator suggests a less extreme effect (Scenarios 8 or 9). For Scenario 6, where there is no effect for either the components or the composite, the permutation test performs better in the investigated scenarios in terms of preserving the type one error. In Scenario 7, where the composite effect is 0 for one weighting scheme, the type one error is preserved for the permutation test as well as for the weight-based log-rank test.
If the weights are chosen to be 1 and 0.7, the performance comparisons lead to essentially the same conclusions (compare Additional file 1). Summarizing the results of our simulation, the new non-parametric estimator and the corresponding weight-based log-rank test outperform the original estimator and the permutation test.
In this work, we investigated a new estimator and test for the weighted all-cause hazard ratio, which was recently proposed by Rauch et al. [15] as an alternative effect measure to the standard all-cause hazard ratio to assess a composite time-to-event endpoint. The weighted all-cause hazard ratio as a weighted effect measure for composite endpoints is appealing because it is a natural extension of the all-cause hazard ratio. It makes it possible to regulate the influence of event types of greater clinical relevance and thereby eases the interpretation of the results. Generally, it must be noted that the weighted all-cause hazard ratio was introduced to ease the interpretation of the effect in terms of clinical relevance. The aim of the weighted effect measure is not to decrease the sample size or increase the power. The power of the weighted all-cause hazard ratio can be larger but may also be smaller than the power of the unweighted standard approach.
The original parametric estimator proposed by Rauch et al. [15] requires the specification of a parametric survival model to estimate the cause-specific hazards. Moreover, in the original work by Rauch et al. [15] a permutation test was proposed to test the new effect measure, which comes along with a high computational effort. In this work, we overcome these shortcomings by proposing a new non-parametric estimator for the weighted all-cause hazard ratio and a closed-formula test statistic which is given by a weight-based version of the well-known log-rank test.
The simulation study performed within this work shows that the original parametric estimator is sensitive to misspecification of the underlying cause-specific event time distribution. If there are uncertainties about the underlying parametric model for the identification of the cause-specific hazards, we therefore recommend using the new non-parametric estimator. In fact, the new non-parametric estimator proposed in this work turns out to be more robust even if the required assumption of equal cause-specific baseline hazards is not met. The relative efficiency as well as the coverage also indicate that the performance of the non-parametric estimator is in most cases at least as good as that of the original parametric estimator. Additionally, in our scenarios convergence problems arose more often when using the parametric estimator. These convergence problems arose in scenarios where the effect for one event type was either very high at the beginning of the observational period or there was nearly no effect at the end of the observational period where the survival function reaches 0. Moreover, the simulation study shows that the new weight-based log-rank test results in considerably better power properties than the originally proposed permutation test in almost all investigated scenarios. In some scenarios the type one error might not be preserved, and it has to be investigated further in which cases exactly this occurs and how it can be addressed. In addition, the weight-based log-rank test is computationally much less expensive. However, one remaining restriction is that confidence intervals cannot be directly provided because the testing procedure is not equivalent to the Cox score test. The only possibility to provide confidence intervals for the weighted hazard ratio would be by means of bootstrapping techniques.
Apart from investigating the performance of the point estimator and the related statistical test, we additionally provide step-by-step guidance on how to choose the relevance weights for the individual components in the planning stage. It is often criticized that the choice of relevance weights in a weighted effect measure is to a certain extent arbitrary. By applying our step-by-step guidance for the choice of weights, this criticism can be addressed. To be concrete, we propose to choose a weight of 1 for the clinically most relevant component and to choose weights smaller than or equal to 1 for all other components by judging how many events of a certain type would be considered as equally harmful as an event in the most relevant component. Using this approach for defining the weights, comparability to the unweighted approach is given and the most relevant event serves as a reference. When the shape of the different event time distributions is known in the planning stage, we also recommend looking at the plots of the weighted and unweighted event time distributions for different weight constellations to visually inspect the influence of the weight choice on the shape of the survival curves and on the treatment effect.
In conclusion, we recommend using the new non-parametric estimator along with the weight-based log-rank test to assess the weighted all-cause hazard ratio. When applying the weighting scheme proposed within our step-by-step guidance, the choice of the weights can be motivated with reasonable clinical knowledge. With the results from this work, the weighted all-cause hazard ratio therefore becomes a very attractive new effect measure for clinical trials with composite endpoints.
Simulated data and R programs can be obtained from the authors upon request.
Cox DR. Regression models and life-tables. J Royal Stat Soc Ser B (Methodol). 1972; 34(2):187–220.
Lubsen J, Kirwan BA. Combined endpoints: can we use them? Stat Med. 2002; 21(19):2959–7290.
U.S. Department of Health and Human Services. Food and Drug Administration, Center for Drug Evaluation and Research (CDER), Center for Biologics Evaluation and Research (CBER), ICH. Guidance for Industry: E9 Statistical Principles for Clinical Trials. 1998. http://www.fda.gov/downloads/drugs/guidancecomplianceregulatoryinformation/guidances/ucm073137.pdf. Accessed 23 Aug 2017.
Rauch G, Beyersmann J. Planning and evaluating clinical trials with composite time-to-first-event endpoints in a competing risk framework. Stat Med. 2013; 32(21):3595–608.
Bethel MA, Holman R, Haffner SM, Califf RM, Huntsman-Labed A, Hua TA, Murray J. Determining the most appropriate components for a composite clinical trial outcome. Am Heart J. 2008; 156(4):633–40.
Freemantle N, Calvert M. Composite and surrogate outcomes in randomised controlled trials. Am Heart J. 2007; 334(1):756–7.
Freemantle N, Calvert M, Wood J, Eastaugh J, Griffin C. Composite outcomes in randomized trials - greater precision but with greater uncertainty? J Am Med Assoc. 2003; 289(19):756–7.
Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen. General methods - version 5.0. 2017. https://www.iqwig.de/download/allgemeine-methoden_version-5-0.pdf. accessed 23 Aug 2017.
Pocock SJ, Ariti CA, Collier TJ, Wang D. The win ratio: a new approach to the analysis of composite endpoints in clinical trials based on clinical priorities. Eur Heart J. 2012; 33(2):176–82.
Buyse M. Generalized pairwise comparisons of prioritized outcomes in the two-sample problem. Stat Med. 2010; 29(30):3245–57.
Péron J, Buyse M, Ozenne B, Roche L, Roy P. An extension of generalized pairwise comparisons for prioritized outcomes in the presence of censoring. Stat Methods Med Res. 2016; 27(4):1230–9.
Lachin JM, Bebu I. Application of the wei lachin multivariate one-directional test to multiple event-time outcomes. Clin Trials. 2015; 12(6):627–33.
Bebu I, Lachin JM. Large sample inference of a win ratio analysis of a composite outcome based on prioritized outcomes. Biostatistics. 2016; 17(1):178–87.
Rauch G, Jahn-Eimermacher A, Brannath W, Kieser M. Opportunities and challenges of combined effect measures based on prioritized outcomes. Stat Med. 2014; 33(7):1104–20.
Rauch G, Kunzmann K, Kieser M, Wegscheider K, Koenig J, Eulenburg C. A weighted combined effect measure for the analysis of a composite time-to-first-event endpoint with components of different clinical relevance. Stat Med. 2018; 37(5):749–67.
Lin RS, León LF. Estimation of treatment effects in weighted log-rank tests. Contemp Clin Trials Commun. 2017; 8(1):147–55.
R Core Team. R: A language and environment for statistical computing. Version 3.3.3. Vienna: R Foundation for Statistical Computing; 2017. https://www.r-project.org/.
Matsumoto M, Nishimura T. Mersenne twister: a 623-dimensionally equidistributed uniform pseudorandom number generator. ACM Trans Model Comput Simul. 1998; 8(1):3–30.
Bender R, Augustin T, Blettner M. Generating survival times to simulate Cox proportional hazards models. Stat Med. 2005; 24(11):1713–23.
Kleinbaum DG, Klein M. Survival Analysis, A Self-Learning Text, Third Edition. New York: Springer; 2012.
Pletcher SD. Model fitting and hypothesis testing for age-specific mortality data. J Evolution Biol. 1999; 12(3):430–9.
This work was supported by the German Research Foundation (Grant RA 2347/1-2). The German Research Foundation had no influence on any of the research, i.e. study design, analysis, interpretation, or writing, done in this article.
Institute of Medical Biometry and Epidemiology, University Medical Center Hamburg-Eppendorf, Martinistraße 52, Hamburg, 20246, Germany
Ann-Kathrin Ozga
Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin, Humboldt-Universität zu Berlin, Institute of Biometry and Clinical Epidemiology, Charitéplatz 1, Berlin, 10117, Germany
Geraldine Rauch
Berlin Institute of Health (BIH), Anna-Louisa-Karsch 2, Berlin, 10178, Germany
Correspondence to Ann-Kathrin Ozga.
The additional file contains further simulation results with the distributions described in this work but other weighting schemes.(PDF 172 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Ozga, AK., Rauch, G. Introducing a new estimator and test for the weighted all-cause hazard ratio. BMC Med Res Methodol 19, 118 (2019). https://doi.org/10.1186/s12874-019-0765-1
Composite endpoint
Weighted effect measure
Weight-based log-rank test
Simulation study
Instagram Background Crack Keygen Full Version PC/Windows [2022-Latest]
Instagram Background Crack Free Download For Windows
– Instantly change the background of your Instagram account
– Change your Instagram background in a jiffy
– Beautiful wallpapers and other delightful aspects
– Choose from a large selection of wallpapers
– Change your Instagram background whenever you want
– Lovely Instagram backgrounds
– No more annoying ads when you log in
– Do you have something to say? We would love to hear your thoughts, so feel free to give us feedback on Google Form!
57 So.2d 765 (1952)
WILCHER
Supreme Court of Florida, en Banc.
Geo. B. Hodges, Gainesville, for petitioner.
Richard W. Ervin, Atty. Gen., and St. John Rivers, Asst. Atty. Gen., for respondent.
SEBRING, Justice.
The petitioner appeals from an order dismissing his application for a writ of habeas corpus based upon alleged infringement of his constitutional rights in the enforcement of a sentence to which he had committed himself upon conviction of petit larceny. The lower court, after reviewing the testimony heard by the Court of Criminal Appeals when he granted and overruled the petition for the writ, concluded the issues raised by the application could be decided on the basis of the record in the trial court. Accordingly, the order appealed from is reversed and the cause remanded to the lower court with instructions to make an evidentiary determination of the validity of the factual matters alleged in the petition and the manner in which they affect the petitioner's asserted constitutional right.
The facts are not in dispute: Petitioner was convicted of petit larceny and sentenced to two years in the state prison. From the record it appears that while being incarcerated in a federal penal institution at Atlanta, Georgia, petitioner requested Federal authorities to send him a Texas attorney to represent him in a Federal writ of habeas corpus. Upon being notified of petitioner's request, the officials of the prison forwarded a letter to the Federal authorities, at San Antonio, Texas, in the official mail, indicating petitioner's request and asking that they be informed of the outcome of petitioner's request. The letter was sent in the regular course of prison mail. It arrived at the Federal courthouse in San Antonio, Texas, in due time. However, it was marked "Returned to sender", after which it was returned to the prison authorities and lodged in the prison mail room. For several weeks thereafter the petitioner complained
Instagram Background Crack + License Keygen
Create something with thousands of bright, beautiful pictures
Capture an image, then you can choose one from the collection
It's quick and simple, it's free to download and use
And best of all, no ads or requirements.
InstaBg is a simple, free photo editor that lets you manipulate the photos in your Instagram feed without any special coding or added software. It's easy to use and supports a variety of effects, which makes it a great alternative for those who want to check out the different styles and filters their followers have created for their photos.
InstagBg has been running for a while and now has over 2 million active users, which indicates its popularity level. It's a simple extension that anyone can use to easily spice up their Instagram feed with a fresh, new image every day. To use InstaBg, just open a Chrome tab and go to instabg.me.
The creator of this extension is MangoApps Inc., the same development team that brought us The Midnight Foto, VSCO Cam, and Shelfie.
Instagram Background For Windows 10 Crack Description:
The #1 wallpaper app for Instagram on Android and iOS is back with BTS wallpaper!
BE BACK!
A lot has happened since the last time we released the wallpaper app for Instagram.
Instagram has redesigned its profile.
A new version of the app has been released.
And you can now get our app directly from the Google Play Store.
Android users can get BTS wallpaper for Instagram from the Google Play Store without using the official Instagram app!
Instagram App for Android + iOS –
One key reason why we made the release of a new app for Instagram was to offer users a way to use wallpapers in the social network without having to use a different app.
So what was the case before?
We were the first (and only) app to enable Instagram users to download wallpapers.
However, if you weren't a fan of the app, or you had some reason why you couldn't use it, that was the end of your ability to use wallpapers for Instagram.
We are proud to be the first to offer a solution that will keep users' wallpapers and their Instagram experience in sync.
InstaBg, the wallpaper
Instagram Background Crack+ Activator
– Simple to use
– Ability to change back to the original background
The fast and easy way to change the Instagram background to one of your choice.
Q:
Spectrum of continuous linear map is closed
Let $X$ be a Banach space.
Let $T \in B(X)$ be a continuous linear map.
I want to show that the spectrum of $T$ is closed.
I know that if $T$ is compact, the spectrum is closed, but that is not an assumption in the problem.
Hint. If $0$ is an isolated point of $\sigma(T)$, then there exists a non-zero $x\in X$ such that $\left\|T^nx\right\|\rightarrow 0$ as $n\rightarrow\infty$.
Every continuous linear map $T$ is a compact operator (in the normed topology).
By the spectral theorem,
$$\sigma(T)=\sigma(T)\cap\mathbb C^*=\sigma(T)\cap(0,\infty).$$
By a theorem of Carathéodory, the union of the boundary of a closed set is closed. So, $\sigma(T)$ is closed.
What's New in the Instagram Background?
In conclusion, that's all I have about Instagram Background for now. If you have anything to add, if there are any features that you like about the extension, or anything else, be sure to leave us a comment below.
It is understandable to always wish to have a beautiful desktop. But did you know you can create a desktop with your own picture? Yes, it's true. We can quickly change the desktop background on Windows 7, 8, 8.1 and Windows 10 to a personal picture.
Simply do follow the following steps:
First of all, please make sure that you update all the required software that are necessary for this task (where you want to have your desktop).
– For Windows 7, please download this tool through the link provided below.
– For Windows 8 or Windows 8.1, please download this tool through the link provided below.
If you cannot find anything, you can get the software from the link provided below.
– Note: The free version cannot run more than one picture at the same time.
Having the software ready, we move on to the actual process.
Steps To Set Windows 7 Desktop Background
Step 1 – Firstly, click on the "Windows Start button". Then move the cursor to "Control Panel".
Step 2 – Now, you can go to "Change Desktop Background".
Step 3 – Now, click on the "Change Desktop Background" tab.
Step 4 – Now, you can choose the picture you want to set as the desktop wallpaper.
Step 5 – Click on the "Browse" button. Now, a new window will open.
Step 6 – Then select the picture you want to set as your desktop wallpaper.
Step 7 – Now, click on "Set as Desktop Background".
That's it! You are done with the setting process.
How to Set Windows 8 Desktop Background?
The steps in order to change desktop background are quite similar, except for the software that is required. Here are the steps:
Step 1 – Open the "Control Panel".
Step 2 – Then click on "Device Stage".
Step 3 – Now, click on "Change Background".
Step 4 – A new window will open.
https://wakelet.com/wake/2JJURlZ7FuCTIcXXxFP7J
https://wakelet.com/wake/MrdQHchYJE-PWnkT_A-Mq
https://wakelet.com/wake/YRXmo0_KEbhXjHghha33N
https://wakelet.com/wake/G5KXEYmvPXevp059Joerq
https://wakelet.com/wake/IQeAKvHl1S5zciJzTvK9J
System Requirements For Instagram Background:
* Windows 7/8, Windows 10, or Mac OS X El Capitan 10.11.x (v 10.11.4, 13C32, 14A35), 10.11.5, 10.12.x (v 10.12.4, 14C32, 15A39), or 10.13.x (v 10.13.1, 15B44)
* NVIDIA GPUs with support for CUDA 2.0
* AMD GPU with support for AMD APP or AMD Catalyst (v 19.3 or later)
https://gobigup.com/internet-password-recovery-wizard-crack-patch-with-serial-key/
https://instafede.com/info-txt-generator-download-pc-windows-latest/
https://klassenispil.dk/backupery-for-evernote-crack-patch-with-serial-key-pc-windows-2022/
https://thebakersavenue.com/wx-smart-menu-mac-win-updated-2022/
https://npcfmc.com/screen-saver-construction-set-crack-download-april-2022/
http://www.male-blog.com/2022/07/13/chatterino-1-6-514-crack-activator-free-for-windows-latest-2022/
https://healinghillary.com/countdown-to-wwe-survivor-series-crack-for-pc-final-2022/
http://www.rathisteelindustries.com/dj-eq-crack-keygen/
http://yogaapaia.it/archives/46457
http://daniel-group.net/?p=5513
https://romans12-2.org/file-time-browser-crack-free-registration-code-mac-win/
http://www.tenutacostarossa.it/sudoku-puzzle-crack-with-product-key-pcwindows/
http://westghostproductions.com/?p=9885
http://diamondtoolusa.com/search-and-replace-regular-expression-wizard-crack/
https://www.theblender.it/timer-crack-download-x64-updated-2022/
Tags: Instagram Background
Pipe Stream
Picture by Nevit via Wikimedia Commons
Your hometown has hired some contractors – including you! – to manage its municipal pipe network. They built the network, at great expense, to supply Flubber to every home in town. Unfortunately, nobody has found a use for Flubber yet, but never mind. It was a Flubber network or a fire department, and honestly, houses burn down so rarely, a fire department hardly seems necessary.
In the possible event that somebody somewhere decides they want some Flubber, they would like to know how quickly it will flow through the pipes. Measuring its rate of flow is your job.
You have access to one of the pipes connected to the network. The pipe is $l$ meters long, and you can start the flow of Flubber through this pipe at a time of your choosing. You know that it flows with a constant real-valued speed, which is at least $v_1$ meters/second and at most $v_2$ meters/second. You want to estimate this speed with an absolute error of at most $\frac{t}{2}$ meters/second.
Unfortunately, the pipe is opaque, so the only thing you can do is to knock on the pipe at any point along its length, that is, in the closed real-valued range $[0,l]$. Listening to the sound of the knock will tell you whether or not the Flubber has reached that point. You are not infinitely fast. Your first knock must be at least $s$ seconds after starting the flow, and there must be at least $s$ seconds between knocks.
Determine a strategy that will require the fewest knocks, in the worst case, to estimate how fast the Flubber is flowing. Note that in some cases the desired estimation might be impossible (for example, if the Flubber reaches the end of the pipe too quickly).
The input consists of multiple test cases. The first line of input contains an integer $c$ ($1 \leq c \leq 100$), the number of test cases. Each of the next $c$ lines describes one test case. Each test case contains the five integers $l$, $v_1$, $v_2$, $t$ and $s$ ($1 \leq l, v_1, v_2, t, s \leq 10^9$ and $v_1 < v_2$), which are described above.
For each test case, display the minimal number of knocks required to estimate the flow speed in the worst case. If it might be impossible to measure the flow speed accurately enough, display impossible instead.
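A quick sanity check on the problem is the information-theoretic lower bound: the speed interval [$v_1$, $v_2$] must be narrowed to width at most $t$, i.e. split into $\lceil (v_2 - v_1)/t \rceil$ candidate sub-intervals, and each knock answers one yes/no question. The Python sketch below computes only that lower bound (plus the trivial zero-knock case); it is an illustration with our own function names, not the full contest solution, which must also account for the pipe-length and knock-timing constraints that can make the task impossible.

```python
import sys

def knock_lower_bound(l, v1, v2, t, s):
    # Number of speed sub-intervals of width <= t needed so that the midpoint
    # of the final interval has absolute error <= t/2.
    buckets = (v2 - v1 + t - 1) // t
    # Each knock answers one yes/no question, so at least ceil(log2(buckets))
    # knocks are needed.  The true answer may be larger, or "impossible", once
    # the pipe length l and the knock spacing s are taken into account --
    # this sketch ignores both.
    return (buckets - 1).bit_length()

if __name__ == "__main__":
    data = sys.stdin.read().split()
    c = int(data[0])
    for i in range(c):
        l, v1, v2, t, s = map(int, data[1 + 5 * i: 6 + 5 * i])
        print(knock_lower_bound(l, v1, v2, t, s))
```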
Problem ID: pipe
CPU Time limit: 1 second
Memory limit: 1024 MB
Difficulty: 7.1
Sample data files
Source: International Collegiate Programming Contest (ACM-ICPC) World Finals 2015
License: Restricted, used with permission
Preface to the special issue dedicated to James Montaldi
Alain Albouy 1 and Holger R. Dullin 2
IMCCE, UMR8028, Observatoire de Paris, 77 avenue Denfert-Rochereau, 75014 Paris, France
School of Mathematics and Statistics, University of Sydney, Sydney NSW 2006, Australia
Dedicated to James Montaldi
Received September 2019; Revised December 2019; Published March 2020
The classical equations of the Newtonian 3-body problem do not only define the familiar 3-dimensional motions. The dimension of the motion may also be 4, and cannot be higher. We prove that in dimension 4, for three arbitrary positive masses, and for an arbitrary value (of rank 4) of the angular momentum, the energy possesses a minimum, which corresponds to a motion of relative equilibrium which is Lyapunov stable when considered as an equilibrium of the reduced problem. The nearby motions are nonsingular and bounded for all time. We also describe the full family of relative equilibria, and show that its image by the energy-momentum map presents cusps and other interesting features.
Keywords: 3-body problem, symplectic symmetry reduction, Lyapunov stability.
Mathematics Subject Classification: 37N05, 70F10, 70F15, 70H33, 53D20.
Citation: Alain Albouy, Holger R. Dullin. Relative equilibria of the 3-body problem in $ \mathbb{R}^4 $. Journal of Geometric Mechanics, 2020, 12 (3) : 323-341. doi: 10.3934/jgm.2020012
Figure 2. Three distinct masses $ (m_1, m_2, m_3) = (3, 2, 1)/6 $. The Lagrange equilateral family is the vertical line, not extending all the way to $ k = 1/4 $. The three non-equilateral families emerge from Euler's collinear configurations at $ k = 0 $ and for $ h \to -\infty $ approach collision configurations. The two short families have a cusp each. The long family emerges at the Euler collinear configuration with the smallest energy, then touches the endpoint of the equilateral family, and then is tangent to the maximum at $ k = 1/4 $. Past this tangency it corresponds to minimal energy at fixed $ k $ and hence is non-linearly stable
Figure 3. Equal mass case. The Lagrange equilateral family is the vertical line, in this case extending all the way to $ k = 1/4 $. The three isosceles families coincide and emerge at the collinear Euler solution at $ k = 0 $. Isosceles triangles with $ \rho < 1 $ are non-linearly stable because they are minima in the energy for fixed $ k $
Figure 4. Two equal masses, third mass smaller ($ \mu = 1/2 $). The Lagrange equilateral family is the vertical line, not extending all the way to $ k = 1/4 $. At the endpoint it meets the isosceles long family, which later touches $ k = 1/4 $. Both short families of asymmetric triangles emerge from a collinear Euler configuration at $ k = 0 $ and have a cusp beyond which $ h $ approaches $ -\infty $. The isosceles configurations to the left of the tangency with $ k = 1/4 $ are the absolute minimum of the energy and hence are non-linearly stable
Figure 5. Two equal masses, third mass bigger ($ \mu = 2 $). The Lagrange equilateral family is the vertical line, not extending all the way to $ k = 1/4 $. At the endpoint it meets the long isosceles family (red), which later touches $ k = 1/4 $. One short family (green) of asymmetric triangles emerges from a collinear Euler configuration at $ k = 0 $, has a cusp tangent to the long family and retraces itself back down. The other short family (blue) of asymmetric triangles starts and finishes at the collision where $ h \to -\infty $, and has a cusp tangent to the long family. These asymmetric triangles of absolute minimal energy are non-linearly stable. There is a tiny part of the long family of symmetric isosceles triangles which have absolute minimal energy and hence are non-linearly stable
Figure 6. Three distinct masses, somewhat close to the two isosceles cases. Left: masses $ (12, 5, 4)/21 $, Right: masses $ (6, 5, 2)/13 $. The left figure illustrates that there is no continuity in the balanced families when perturbing from the case with two equal masses and the third mass larger than the equal ones, compare Fig. 5
Figure 1. Three smooth families of balanced configurations. Long family red, short families blue and green. Masses $ (m_1, m_2, m_3) = (3, 2, 1)/6 $. Isosceles shapes are shown as dashed blue lines. Left: $ a(b) $ for $ c = 1 $, the long family exists for all values of $ b $. Right: The extended triangle of shapes $ I = const $ with boundary black dashed where one side length vanishes. The thick black ellipse marks shapes with area $ A = 0 $ with contour lines of constant positive area inside. The other set of contour lines indicate $ V = const $. Special points are marked by their projective triple $ [a, b, c] $
Using force data to self-pace an instrumented treadmill and measure self-selected walking speed
Seungmoon Song1,
Hojung Choi1 &
Steven H. Collins1
Self-selected speed is an important functional index of walking. A self-pacing controller that reliably matches walking speed without additional hardware can be useful for measuring self-selected speed in a treadmill-based laboratory.
We adapted a previously proposed self-pacing controller for force-instrumented treadmills and validated its use for measuring self-selected speeds. We first evaluated the controller's estimation of subject speed and position from the force-plates by comparing it to those from motion capture data. We then compared five tests of self-selected speed. Ten healthy adults completed a standard 10-meter walk test, a 150-meter walk test, a commonly used manual treadmill speed selection test, a two-minute self-paced treadmill test, and a 150-meter self-paced treadmill test. In each case, subjects were instructed to walk at or select their comfortable speed. We also assessed the time taken for a trial and a survey on comfort and ease of choosing a speed in all the tests.
The self-pacing algorithm estimated subject speed and position accurately, with root mean square differences compared to motion capture of 0.023 m s⁻¹ and 0.014 m, respectively. Self-selected speeds from both self-paced treadmill tests correlated well with those from the 10-meter walk test (R > 0.93, p < 1×10⁻¹³). Subjects walked slower on average in the self-paced treadmill tests (1.23±0.27 m s⁻¹) than in the 10-meter walk test (1.32±0.18 m s⁻¹), but the speed differences within subjects were consistent. These correlations and walking speeds are comparable to those from the manual treadmill speed selection test (R = 0.89, p = 3×10⁻¹¹; 1.18±0.24 m s⁻¹). Comfort and ease of speed selection were similar in the self-paced tests and the manual speed selection test, but the self-paced tests required only about a third of the time to complete. Our results demonstrate that these self-paced treadmill tests can be a strong alternative to the commonly used manual treadmill speed selection test.
The self-paced force-instrumented treadmill adapts well to subject walking speed and reliably measures self-selected walking speeds. We provide the self-pacing software to facilitate its use by gait researchers and clinicians.
Self-selected walking speed is one of the main performance indices of walking. It is the speed at which people normally choose to walk and is also known as preferred speed or comfortable speed. Walking speed determines the time required to achieve the primary goal of walking: getting to a destination. Healthy adults normally choose to walk at about 1.3 m s⁻¹, although they can walk much faster (> 2.0 m s⁻¹) [1]. Normal walking speed likely results from balancing many factors, including energy use, time spent in transit, appearance, and comfort. It has often been observed that self-selected walking speed is close to the speed that minimizes metabolic energy consumption [2, 3] or muscle fatigue [4] in traveling a unit distance. Self-selected walking speed has also been emphasized as a promising measure to assess physical health. For example, walking speed is a good predictor of health status and survival rate in older adults [5, 6] and a useful measure for rehabilitation progress [7].
There are different ways to measure self-selected walking speeds. A standard method commonly used in physical therapy and gait studies is the so-called 10-meter walk test [8, 9]. In a 10-meter walk test, subjects are instructed to walk at their comfortable speed across a 15∼20 m walkway, and the time taken to traverse the middle 10 m section is measured with a stopwatch to calculate self-selected walking speed. This process is often conducted multiple times then averaged for reliable measurements. Another common way of measuring self-selected speed is by asking subjects to manually select their comfortable speeds while walking on a treadmill that changes from slow to fast or fast to slow speeds [10–12]. Measuring comfortable speeds on a treadmill is useful for certain cases, such as collecting data in a treadmill-based gait laboratory [13] and studying assistive technologies with immobile systems [14]. On the other hand, this manual selection process requires the subjects to walk at various speeds, which can be time consuming, and to consciously distinguish comfortable from uncomfortable treadmill speeds, which can be confusing for those who are not familiar with walking on a treadmill.
Self-paced treadmills can also be useful in measuring walking speed. A treadmill that can seamlessly adapt to a subject's walking speed can provide an overground-like walking environment and can compensate for shortcomings in the manual speed selection approach. Self-pacing controllers typically consist of two parts, usually treated independently. The first estimates the subject's speed and position. The second controls treadmill speed based on the estimation. The treadmill speed is typically controlled to match subject speed and to keep the subject in the middle of the treadmill [15, 16]. Various approaches of estimating subject speed and position have been used. One approach is to use a marker-based optical motion capture system [16–18], which is widely used in research laboratories as a part of a commercial virtual reality package [19]. Researchers have evaluated these motion capture based self-paced treadmills by comparing kinematic and kinetic gait features collected on the self-paced treadmill to those during fixed speed treadmill walking [16] and overground walking [18]. In addition, these self-paced treadmills have been used in rehabilitation research for children with cerebral palsy [20, 21], individuals with chronic stroke [22], and individuals with transtibial amputation [23]. Other approaches with low-cost sensors or simpler hardware have been proposed as well, such as using a marker-free infrared-based motion sensor [24], an ultrasonic distance sensor [25], a harness with force sensors [26], and force plates on an instrumented treadmill [15].
A self-pacing controller using force-plate data from an instrumented treadmill is attractive because it does not require additional hardware or instrumentation. Feasel and colleagues [15] have proposed such a controller and used it to separately control the belts on a split-belt treadmill for asymmetric gait. They calculated the ground reaction forces and center of pressure from the force-plate data and combined them with a Kalman filter to track walking speed. The study focused on testing the feasibility of improving gait symmetry in hemiparetic patients with a virtual environment that integrated the self-paced treadmill and a visual scene. Although they reported that the hemiparetic patients self-selected to walk at speeds comparable to their overground speeds, a more thorough evaluation of self-selected walking speed on this type of self-paced treadmill would improve our understanding of its efficacy.
Various aspects of a walking speed test protocol can unexpectedly affect gait and self-selected walking speed. For example, the treadmill speed controller can induce changes in gait. The mechanics of walking on a treadmill that moves at a constant speed are identical to overground walking. However, when the treadmill accelerates, the belt reference frame is no longer equivalent to a fixed-ground reference [27]. In fact, some belt speed control dynamics can lead subjects to walk at speeds far from their preferred over-ground speed [28]. People may also choose different speeds for different walking tasks, such as to walk for a preset time or a preset distance. If people wish to minimize their energy cost in the fixed distance task, they should walk at a speed close to their normal overground speed. In order to minimize effort in the fixed time task, however, they should walk very slowly or even stand still [3]. Then again, people might not be familiar with the implications of a fixed-time walking task, or might place higher weights on comfort or appearance, or might use a heuristic that defaults to a typical speed in both tasks. The specifics of the task, such as the target distance, may also affect walking speed [29, 30]. People may also change their walking speed in response to other contextual variations, such as the visual environment [31, 32] or auditory cues [33]. Even the details of the verbal instructions provided to participants can have a strong effect on walking speed [34]. Therefore, it is important to validate the self-selected speed test protocol of interest.
A straightforward way of validating a self-selected walking speed test is to compare its measured speeds to those from the standard walking speed test. However, only a few studies have thoroughly compared walking speed on a self-paced treadmill to that during overground walking, and most of those studies were for a motion capture based commercial self-paced treadmill [18, 35]. Van der Krogt and colleagues [35] compared self-selected speeds of typically developing children and children with cerebral palsy in outdoor walking, overground walking in a lab, and walking on a self-paced treadmill in a virtual environment. Children were instructed to "walk at their own preferred, comfortable walking speed." Both groups of children walked the fastest outdoor, about 5% slower in the lab, and about 10% slower on the self-paced treadmill. Similarly, Plotnik and colleagues [18] compared self-selected speeds in healthy adults during walking for 96 m overground, on a self-paced treadmill, and on a self-paced treadmill with a virtual environment. Subjects were instructed to "walk at their own self-selected preferred comfortable speed." Subjects walked on the self-paced treadmill at speeds comparable to their overground speeds, while they walked slightly faster when a virtual environment was presented. In addition, walking speed converged faster to steady speed with the virtual environment. These tests demonstrate the value of characterizing response to a self-paced treadmill prior to using it to evaluate the effects of other interventions on self-selected walking speed.
Here, we adapt the force-based self-paced treadmill controller proposed by Feasel and colleagues [15] and evaluate two self-selected walking speed tests using it. First, we explain how the proposed self-pacing controller estimates subject speed and position and adjusts the treadmill speed. Then, we evaluate the speed and position estimations of our controller by comparing them with motion capture data. We then validate the use of the self-paced treadmill for measuring self-selected walking speed. We compare self-selected walking speeds measured from five different speed tests: the standard 10-meter overground walk test, a 150-meter overground walk test, a commonly used manual speed selection treadmill test, a 2-minute self-paced treadmill test, and a 150-meter self-paced treadmill test where subjects can see their goal and progress on a monitor. We compare self-selected walking speed in the 10 m and 150 m overground conditions to test whether the standard measure well represents speeds in longer bouts of walking. We validate the self-paced treadmill tests by evaluating how well they correlate with the standard measure and by comparing them to the commonly used treadmill test. The 2-minute and 150-meter self-paced treadmill tests are compared to each other to examine whether it is necessary to explicitly motivate subjects to walk at their typical speeds by setting target distance and showing their progress. Finally, we discuss the implications of our findings and potential extension of our self-paced treadmill for rehabilitation and assistive device studies.
Self-pacing Algorithm
We revised the self-pacing controller for force-instrumented treadmills proposed by Feasel and colleagues [15]. The central idea is to estimate subject walking speed from foot contact positions and to improve the estimations by incorporating force measurements using a Kalman filter. In our implementation, we track both speed and position with a Kalman filter, which is updated every time step. The filter uses noise matrices determined empirically from motion capture data. We provide a complete description of the algorithm and share the code [36] so that it can be easily used by other researchers.
Our self-pacing controller consists of a subject State Estimator and a treadmill Speed Controller (Fig. 1). The State Estimator takes data from two force plates (third-order Butterworth filter; cutoff frequency: 25 Hz) and the treadmill speed as input and estimates the subject's speed and position every computational time step, Δt. Based on the estimated speed and position, the Speed Controller adjusts the treadmill speed at the beginning of each footstep.
Self-paced treadmill controller. The self-paced treadmill controller consists of a State Estimator and a Speed Controller and only uses force plate data as sensory input
The State Estimator uses data from the two force plates to measure the acceleration, velocity and position of a subject walking on the treadmill and combines the measured values with a Kalman filter. The vertical and fore-aft ground reaction forces (GRFs), \(f_z\) and \(f_y\), as well as the center of pressure (COP) are calculated from the force-plate data. Foot contact is detected when the vertical GRF exceeds a certain threshold, \(f_z > f_{z0} = \) 20% of body weight. We defined the fore-aft foot position on a given step, \(y_{f1}\), as the COP at contact detection. Foot position on the prior step in the lab reference frame, \(y_{f0}\), is calculated as the COP at the previous contact plus the integral of the treadmill speed over the time between the contacts (\(y_{f1}\) and \(y_{f0}\) are shown in Fig. 1). We then estimate the fore-aft acceleration, velocity and position of the subject in the lab reference frame as
$$ a_{mes}=\frac{f_{y}}{m} $$
$$ \bar{v}_{mes} = \frac{y_{f1}-y_{f0}}{t_{1} - t_{0}} - \bar{v}_{tm} $$
$$ \bar{p}_{mes} \approx \frac{y_{f1}+y_{f0}}{2} $$
where \(m\) is the mass of the human subject, \(t_0\) and \(t_1\) are the times when each foot contact occurs, and the variables with a bar indicate mean values during that step (i.e. between consecutive foot contact detections). Eq. 1 is Newton's second law. Eq. 2 estimates the subject's mean speed in the lab reference frame, \(\bar{v}_{mes}\), by subtracting the treadmill speed (\(\bar{v}_{tm}\)) from the subject's walking speed. The subject's walking speed is calculated as step length (\(y_{f1} - y_{f0}\)) divided by step time (\(t_1 - t_0\)). Eq. 3 defines the subject's mean position, \(\bar{p}_{mes}\), as the middle of the leading and trailing foot placements at a new foot contact.
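For concreteness, the per-footstep measurement model of Eqs. 1–3 can be sketched in code. The Python fragment below is only an illustration of the equations (the actual implementation is in Matlab/Simulink Real-Time); the function and variable names are ours, and the sign conventions follow the text above literally.

```python
import numpy as np

def footstep_measurements(f_y, f_z, cop_y, v_tm, dt, mass, fz_frac=0.20):
    """Measurement model of Eqs. 1-3 (illustrative names, not the authors' code).

    f_y, f_z : fore-aft and vertical ground reaction force samples [N]
    cop_y    : fore-aft center of pressure samples [m]
    v_tm     : treadmill belt speed samples [m/s]
    dt       : sample period [s];  mass : subject mass [kg]
    """
    f_y, f_z, cop_y, v_tm = map(np.asarray, (f_y, f_z, cop_y, v_tm))
    a_mes = f_y / mass                                        # Eq. 1, every time step

    fz0 = fz_frac * mass * 9.81                               # 20% body-weight threshold
    contact = f_z > fz0
    steps = np.flatnonzero(contact[1:] & ~contact[:-1]) + 1   # new foot contacts

    v_mes, p_mes = [], []
    for i0, i1 in zip(steps[:-1], steps[1:]):
        step_time = (i1 - i0) * dt                            # t1 - t0
        y_f1 = cop_y[i1]                                      # COP at the new contact
        y_f0 = cop_y[i0] + np.sum(v_tm[i0:i1]) * dt           # prior foot, belt-advanced
        v_mes.append((y_f1 - y_f0) / step_time - np.mean(v_tm[i0:i1]))   # Eq. 2
        p_mes.append(0.5 * (y_f1 + y_f0))                     # Eq. 3
    return a_mes, np.array(v_mes), np.array(p_mes), steps
```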
We implemented a Kalman filter to combine the measurement values \(a_{mes}\), \(\bar{v}_{mes}\) and \(\bar{p}_{mes}\) to continuously estimate the subject's speed and position (Table 1). The filter keeps track of subject speed and position by predicting them every time step from \(a_{mes}\) (Table 1: line 2), and by correcting them with new measurements \(\bar{v}_{mes}\) and \(\bar{p}_{mes}\) every footstep (line 6). The measurement update is conducted when a new foot contact is detected (line 4). The filter rejects steps of unreasonable duration (greater than 1.2 seconds) to skip the measurement update when subjects cross over the belts (e.g. stepping on the left belt with the right foot). The system model, \(A\) and \(B\) (and the observation model \(C = I\)), describes the relationship between the measurement values according to Newton's second law. The noise matrices, \(Q\) and \(R\), as well as the initial error covariance matrix \(P_0\), are determined from data collected in walking sessions, where two subjects walked on a treadmill at speeds between 0.8 and 1.8 m s⁻¹ in ten one-minute trials. The noise matrices are set based on \(\sigma_a\), \(\sigma_v\) and \(\sigma_p\) (Table 1), which are the differences in \(a_{mes}\), \(\bar{v}_{mes}\) and \(\bar{p}_{mes}\), respectively, calculated from force-plate data and motion capture data. \(P_0\) is set to the mean of the values \(P\) converged to at the end of the pilot sessions.
Table 1 Pseudo code of Kalman filter for walking speed and position estimation
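Since the pseudo code of Table 1 is not reproduced here, the following Python sketch illustrates the filter structure described above: a constant-acceleration time update every time step driven by \(a_{mes}\), and a measurement update of speed and position once per detected footstep. The class name, state ordering and noise values are placeholders chosen for illustration; the actual matrices were tuned from the authors' pilot walking data.

```python
import numpy as np

class WalkingKalmanFilter:
    """Tracks subject fore-aft speed and position in the lab frame.
    State x = [v, p]; time update uses measured acceleration (Eq. 1),
    measurement update uses per-step speed and position (Eqs. 2-3)."""

    def __init__(self, dt, q=(1e-3, 1e-4), r=(0.02**2, 0.015**2), p0=(0.1, 0.1)):
        self.dt = dt
        self.A = np.array([[1.0, 0.0],
                           [dt,  1.0]])      # v_k+1 = v_k ; p_k+1 = p_k + v_k*dt
        self.B = np.array([[dt],
                           [0.5 * dt**2]])   # effect of measured acceleration a_mes
        self.C = np.eye(2)                   # both states measured once per step
        self.Q = np.diag(q)                  # process noise (placeholder values)
        self.R = np.diag(r)                  # measurement noise (placeholder values)
        self.x = np.zeros((2, 1))
        self.P = np.diag(p0)

    def time_update(self, a_mes):
        """Run every time step (prediction)."""
        self.x = self.A @ self.x + self.B * a_mes
        self.P = self.A @ self.P @ self.A.T + self.Q

    def measurement_update(self, v_mes, p_mes):
        """Run once per footstep (correction)."""
        z = np.array([[v_mes], [p_mes]])
        S = self.C @ self.P @ self.C.T + self.R
        K = self.P @ self.C.T @ np.linalg.inv(S)
        self.x = self.x + K @ (z - self.C @ self.x)
        self.P = (np.eye(2) - K @ self.C) @ self.P
```

In use, `time_update` would be called at the controller rate with the current \(a_{mes}\), and `measurement_update` whenever a new foot contact is detected and the step duration is below the 1.2 s cross-over rejection threshold described above.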
The Speed Controller adjusts the treadmill speed to match subject speed and to keep the subject near a baseline position. It updates the treadmill speed once per footstep when a new foot contact is detected. This is different from other self-paced treadmills in previous studies, where speed adjustment is done at a much faster rate (30 ∼120 Hz) [16–18]. Controlling the treadmill speed at a higher frequency can lead to undesired dynamics due to natural speed oscillations during walking. Instead of filtering out these oscillations as in the previous studies, we update it at every footstep. Target treadmill speed is set as
$$ v_{tm,tgt} = \bar{v}_{tm} + G_{v}\bar{v}_{KF} + G_{p}\left(\bar{p}_{KF} - p_{0} \right) $$
where \(p_0\) is the baseline position, and \(\bar{v}_{KF}\) and \(\bar{p}_{KF}\) are the subject's mean speed and position during the last step in the lab reference frame, estimated from the Kalman filter. Note that, despite the plus signs, Eq. 4 is a stabilizing negative feedback, as the treadmill speeds, \(v_{tm,tgt}\) and \(\bar{v}_{tm}\), are determined in the opposite direction from the subject speed and position, \(\bar{v}_{KF}\) and \(\bar{p}_{KF}\), in the lab reference frame. The baseline position \(p_0\) can be predetermined by the experimenter (e.g. \(p_0 = 0\)), manually tuned based on subject feedback, or set based on subject data from familiarization trials. In this study, we used the last approach, where we set \(p_0\) for each subject as the average subject position measured during the fixed-speed portion of the treadmill familiarization. In theory, \(v_{tm,tgt}\) with \(G_v = 1\) will be a speed that matches the subject's estimated walking speed, and \(G_p = 1\) will result in a speed that brings the subject to \(p_0\) in 1 second. However, a controller with these high gains induced abrupt speed changes, which made it difficult for subjects to walk comfortably. Therefore, we use lower gains of \(G_v = 0.25\) and \(G_p = 0.1\), which we found to be reliable and responsive enough for our study. The treadmill acceleration is set to achieve a target velocity in a certain time as
$$ a_{tm,tgt} = \frac{\left (v_{tm,tgt} - \bar{v}_{tm} \right)}{\Delta t_{tm,tgt}} $$
where we use \(\Delta t_{tm,tgt} = 0.5\) s, similar to the duration of a walking step.
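A minimal sketch of the once-per-footstep speed update of Eqs. 4–5, using the gains reported above (\(G_v = 0.25\), \(G_p = 0.1\), \(\Delta t_{tm,tgt} = 0.5\) s); the function and variable names are ours.

```python
def speed_controller(v_tm_mean, v_kf_mean, p_kf_mean, p_baseline,
                     g_v=0.25, g_p=0.1, dt_tgt=0.5):
    """Return (target treadmill speed, commanded acceleration), once per footstep.

    v_tm_mean  : mean belt speed over the last step [m/s]
    v_kf_mean  : Kalman-filter mean subject speed over the last step, lab frame [m/s]
    p_kf_mean  : Kalman-filter mean subject position over the last step, lab frame [m]
    p_baseline : desired subject position on the treadmill [m]
    """
    v_tgt = v_tm_mean + g_v * v_kf_mean + g_p * (p_kf_mean - p_baseline)   # Eq. 4
    a_tgt = (v_tgt - v_tm_mean) / dt_tgt                                   # Eq. 5
    return v_tgt, a_tgt
```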
The code of our self-pacing controller and a graphical user interface are publicly available [36]. The self-pacing controller is implemented in Matlab/Simulink Real-Time and runs on a real-time target machine (Speedgoat) at 1000 Hz (i.e. Δt=0.001). The real-time target machine receives force-plate data from the instrumented treadmill (Bertec) at the same rate. The graphical user interface implemented in Matlab runs on a desktop machine at 100 Hz and allows the experimenter to communicate with the real-time target machine. In addition, it receives the target treadmill speed and acceleration from the real-time target machine and commands it to the treadmill.
Experiment 1: State Estimator
To evaluate the State Estimator, we compared the estimated position and velocity to those from motion capture data. One subject wore a waist belt with four reflective markers and walked on the force-instrumented treadmill for six one-minute trials. Treadmill speed was manually controlled in most of these trials, as we wanted to evaluate the State Estimator independently from the Speed Controller. In the first three trials, the treadmill speed was set to 1.3, 0.8 and 1.8 m s⁻¹. In the fourth trial, the treadmill speed changed every 10 sec from 0.8, 1.0, 1.2, 1.4, 1.6 to 1.8 m s⁻¹. In the fifth trial, the same speeds were presented in reverse order. Then, the treadmill was controlled with our self-pacing controller in the last trial. Positions of the four reflective markers were captured with a motion capture system (Vicon Vantage; 8 cameras), sampled at 100 Hz and low-pass filtered using a third-order Butterworth filter with a cutoff frequency of 20 Hz. The mean of those marker positions, \(p_{mocap}\), and its time derivative, \(v_{mocap}\), were used for evaluation.
We report how the main outputs of the State Estimator, \(\bar{v}_{KF}\) and \(\bar{p}_{KF}\), compare to those from motion capture data. For the mean step velocity, we report the root-mean-square (RMS) difference, \(RMS_{\bar{v}} = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\bar{v}_{KF,i} - \bar{v}_{mocap,i}\right)^{2}}\), where \(n\) is the total number of steps in a walking trial, and \(\bar{v}_{mocap,i}\) is the mean value of \(v_{mocap}\) on the \(i\)th step. \(RMS_{\bar{p}}\) was calculated similarly, but with offset-corrected values for each one-minute trial. This is because \(\bar{p}_{KF}\) does not track the position of the waist. Our approach does not estimate the absolute position of the person's center of mass, but rather its position relative to the average center of pressure at consecutive foot strikes. Note that any measure of body position can be used to maintain a desirable position on the treadmill by comparing it to a corresponding nominal value, typically determined during a fixed-speed calibration trial.
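The RMS comparison itself is a one-liner; a small illustrative helper (our own name) is shown below.

```python
import numpy as np

def rms_difference(x_est, x_ref):
    """Root-mean-square difference between per-step estimates and reference values."""
    x_est, x_ref = np.asarray(x_est), np.asarray(x_ref)
    return np.sqrt(np.mean((x_est - x_ref) ** 2))
```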
Experiment 2: Self-selected Walking Speed Tests
We conducted an experiment to evaluate the validity of our self-paced treadmill in measuring self-selected walking speeds. Ten healthy adults (5 females and 5 males; height: 1.69±0.08 m; age: 25±3 years) participated in the experiment. All subjects participated in a session that consists of familiarization trials and three blocks of five walking speed tests (Fig. 2-a). The familiarization trials were for the subjects to get familiar with walking on our self-paced treadmill and at their comfortable speed in different settings. In addition, the subject's baseline position, p0, was found in the fixed-speed portion of the treadmill familiarization. The five walking speed tests in each of the three blocks were presented in random order.
Experimental protocol for self-selected walking speed tests. a The protocol consists of a familiarization session and the main session organized into three blocks. The familiarization session consists of eight overground and treadmill walking trials, which in total takes about 25 minutes. In self-paced treadmill trials, the treadmill first starts at a slow speed, 0.8 m s⁻¹, then switches to self-paced mode. Each of the blocks in the main session takes about 15 minutes and consists of five self-selected walking speed tests in random order. b The five walking speed tests consist of two overground and three treadmill tests. In the overground tests, subjects start to walk from standing at the experimenter's verbal sign "3, 2, 1, go," and the experimenter measures with a stopwatch the time it takes for the subject to traverse the middle 10 m sections. In the treadmill tests, the treadmill starts at 0.8 m s⁻¹ then switches to either speed sweep mode in Manual Speed Selection or self-paced mode in Self-Paced 2 min and Self-Paced 150 m. In Self-Paced 150 m, a monitor shows a 150 m virtual track and a black circle tracking the subject's position.
We compared five different self-selected walking speed tests. The settings and measurements of the tests are described in Fig. 2-b. Overground 10 m is the standard 10-meter walk test [9, 37] that we used as a reference point in evaluating the outcomes of the other tests. Overground 150 m is to check whether the standard test represents longer distance walking, as walking distance can affect self-selected speed [30]. Manual Speed Selection is a common way to measure preferred walking speed on a treadmill [10–12]. The correlation between the speed measures in Manual Speed Selection and those in Overground 10 m will be the benchmark value for our self-paced treadmill tests. Self-Paced 2 min and Self-Paced 150 m are the tests using our self-paced treadmill. Subjects were informed whether they would walk for 2 min or 150 m, and, for the latter, subject position was shown on a 150 m virtual track on a monitor in real-time. We applied both fixed-time and fixed-distance tests on the self-paced treadmill to determine whether it was necessary to motivate participants to walk a given distance in order to obtain self-selected walking speeds that correlated well with overground, fixed-distance tasks.
The self-selected walking speed tests were designed to be coherent and comparable with each other. For example, the 150 m of walking distance in Self-Paced 150 m was selected to match the distance in Overground 150 m, and the 2 min of walking time in Self-Paced 2 min is the time it takes to walk 150 m at a typical walking speed of 1.25 m s⁻¹. Similarly, in Self-Paced 2 min and Self-Paced 150 m, walking speeds were measured in six sections that correspond to the 10-meter sections in Overground 150 m. We used consistent instructions in all the walking trials [34]. Subjects were instructed to "walk at a comfortable speed" in the overground and self-paced treadmill tests and to verbally indicate when the treadmill gets "faster (or slower) than what you would choose as a comfortable speed" in Manual Speed Selection. When subjects asked for clarification, we elaborated a comfortable speed as "whatever speed feels natural to you."
We compared self-selected walking speeds measured in each test to the value in the standard overground test. The main evaluation was how well walking speed in each test correlated with the speed in the standard test, Overground 10 m. We also compared self-selected speeds in Self-Paced 2 min and Self-Paced 150 m to see whether setting a target walking distance was necessary. In total, we measured 5 sets of 30 self-selected walking speeds: in each of the five tests, ten subjects each walked three times. For each walking speed test other than Overground 10 m, we report a linear model, \(b_1 v_{OG10} + b_0\), that fits these 30 measurements to those in Overground 10 m with the minimum mean-squared-error. A test that has a fit of \(b_1 = 1\) and \(b_0 = 0\) indicates that subjects, on average, are likely to walk at the same speed they walked at in Overground 10 m. We also calculate Pearson's linear correlation coefficient, R, for these pairs of 30 measurements. Correlation coefficients of 1 and 0 correspond to perfect and no correlation, respectively, where a high correlation indicates that much of the variation in measured speeds is captured in the fitted linear model. We considered the linear fit and correlation values to be statistically significant if their p-value was smaller than 0.05.
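This comparison can be reproduced with a short script; a sketch using NumPy and SciPy (not necessarily the authors' tooling, and with our own names for the paired speed arrays) is shown below.

```python
import numpy as np
from scipy import stats

def compare_to_standard(v_test, v_og10):
    """Least-squares linear fit v_test ~ b1 * v_og10 + b0 and Pearson correlation."""
    v_test, v_og10 = np.asarray(v_test), np.asarray(v_og10)
    b1, b0 = np.polyfit(v_og10, v_test, 1)        # minimum mean-squared-error line
    r, p = stats.pearsonr(v_og10, v_test)         # Pearson's R and its p-value
    return b1, b0, r, p
```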
We calculated the variability of self-selected walking speed in each test to determine whether the self-paced treadmill tests were as consistent as the standard overground test. To this end, we calculated the standard deviation of the three walking speed measurements of the same subject within each test, \(SD_{intra}\). We compared these standard deviation values in each test to determine whether certain tests show higher variability than others.
We estimated the time taken to conduct one trial of each walking test to determine whether the self-paced treadmill tests required less time than the common treadmill test. We calculated the minimum time used in all trials in our experiments from the recorded data and report their mean and standard deviation for each walking test. The time for an Overground 10 m trial is calculated as \(T_{OG10} = 1.5 \times T_{OG10,rec} + 6 \times 3\), where \(T_{OG10,rec}\) is the sum of the six recorded times for crossing the 10 m section, the factor of 1.5 accounts for the additional 5 m walked on the 15 m walkway, and the last term is the three-second countdown before each of the six bouts. For \(T_{OG150}\) of the Overground 150 m test, we report the recorded time taken by subjects to complete the 150 m course plus 3 s for the countdown. The time used in Manual Speed Selection, \(T_{MSS}\), is reported as the duration the treadmill was controlled in speed sweep mode plus 3 s for the countdown. Similarly, the times used in Self-Paced 2 min, \(T_{SP2}\), and Self-Paced 150 m, \(T_{SP150}\), are reported as the duration the treadmill was in self-paced mode plus 3 s. Most of the reported times underestimate the actual time required for trials; for example, there were a few additional seconds between each of the six bouts in an Overground 10 m trial, and a few seconds spent before and after the speed sweep and self-paced modes in the treadmill trials.
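As an illustrative arithmetic check of this formula (the 46 s figure is hypothetical, chosen so the result matches the mean \(T_{OG10}\) reported in the Results): if a subject's six recorded 10 m crossing times sum to \(T_{OG10,rec} = 46\) s, then \(T_{OG10} = 1.5 \times 46 + 6 \times 3 = 69 + 18 = 87\) s.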
We calculated the time required for walking speed to converge in self-paced treadmill tests to determine the minimum duration of a test with reliable measurements. We observed that participants seemed to converge to steady speed in much less time than the approximately two minutes provided in self-paced walking speed tests. To determine the convergence time in Self-Paced 2 min, we first calculated the mean and standard deviation of walking speeds during the last 20%, or the last 24 seconds, of the trial. Then we found the moment when walking speed first entered the range of the mean plus or minus one standard deviation, and determined it to be the convergence time, \(t_{cnvg}\). We determined the convergence distance in Self-Paced 150 m similarly by setting the threshold from the mean and standard deviation of the last 30 m of the trial. Note that the initial treadmill speed was 0.8 m s⁻¹ in all the self-paced treadmill trials.
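The convergence criterion described above can be expressed compactly; the sketch below (illustrative, with our own names) finds the first time the walking-speed trace enters the mean ± 1 standard deviation band computed over the final 20% of the trial.

```python
import numpy as np

def convergence_time(speed, time, tail_frac=0.20):
    """First time the speed trace enters mean +/- 1 SD of its final portion."""
    speed, time = np.asarray(speed), np.asarray(time)
    tail = speed[int(len(speed) * (1.0 - tail_frac)):]
    lo, hi = tail.mean() - tail.std(), tail.mean() + tail.std()
    inside = np.flatnonzero((speed >= lo) & (speed <= hi))
    return time[inside[0]] if inside.size else time[-1]
```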
We assessed subject experience in each walking speed test with a survey in order to determine whether the self-paced tests were comfortable and intuitive compared to the standard tests. Subjects rated two written statements for each test after completing all the walking trials. The statements were "it was comfortable walking" and "it was easy to choose my walking speed," and the subjects had five options: strongly disagree, disagree, neutral, agree, and strongly agree. We quantified the selections by assigning scores from 1 to 5 for strongly disagree to strongly agree, respectively.
The statistical significance of differences across walking speed tests, in terms of intra-subject variation, time to measure, and survey scores, was tested using two-way analysis of variance (ANOVA) accounting for different tests and subjects. If a significant effect of test type was found in ANOVA, we conducted paired-sample t-test for every pair of tests. We used significance level of α=0.05.
The proposed self-pacing controller successfully matched subject speed and kept subjects near the baseline position. In the exploration trial of the familiarization session, all subjects easily walked (or even ran) on the self-paced treadmill at a wide range of speeds (about 0 to 2 m s⁻¹).
The State Estimator and motion capture system were in close agreement as to the subject speed and position. The RMS differences between estimations of the Kalman filter and motion capture system during the six one-minute trials were \(RMS_{\bar{v}} = 0.023 \pm 0.003\) m s⁻¹ and \(RMS_{\bar{p}} = 0.014 \pm 0.008\) m. Figure 3 shows the Kalman filter estimations of the subject speed and position, \(v_{KF}\) and \(p_{KF}\), and their mean values during each step, \(\bar{v}_{KF}\) and \(\bar{p}_{KF}\), as well as those values from the motion capture data. In addition, the speed and position calculated by merely integrating ground reaction forces are shown to diverge, demonstrating the necessity of the once-per-footstep measurement update of the Kalman filter. The time update using subject acceleration (Table 1: line 2) allows continuous and more accurate tracking of subject speed and position.
Estimations of State Estimator and motion capture system. The plots show the subject's estimated a instantaneous speed v, b mean speed of each step, \(\bar{v}\), c instantaneous position, p, and d mean position of each step, \(\bar{p}\). All speeds and positions are in the lab reference frame. The values are estimated with the proposed Kalman filter (black line), motion capture system (blue line), and by simply integrating the ground reaction forces (red line). e The data are collected during a one-minute trial where the treadmill speed, \(v_{tm}\), changes from 1.8 to 0.8 m s⁻¹ as shown in the bottom plot. Panels at right provide an enlarged view of the final five seconds of data.
All ten subjects completed the self-selected walking speed test protocol. In the standard Overground 10 m test, the mean and standard deviation of the self-selected walking speeds were 1.32±0.18 m s⁻¹, ranging from 0.98 to 1.79 m s⁻¹. Leg length, defined as the distance between anterior iliac spine and the medial malleolus, explained 20% of the variance in self-selected walking speed (R² = 0.20, p = 0.01), which agrees with previous studies [1].
Walking speeds measured in Overground 150 m were close to those in Overground 10 m. The fitted linear model was close to the identity line with a high correlation coefficient (Fig. 4-a). The mean and standard deviation of walking speeds were 1.35±0.19 m s⁻¹. This result supports the conclusion that the standard test, Overground 10 m, reliably reflects walking speed over longer distances.
Speeds measured in the self-selected walking speed tests. The self-selected walking speeds measured in a Overground 150 m, b Manual Speed Selection, c Self-Paced 2 min, and d Self-Paced 150 m are compared to those from Overground 10 m. The data points relate a self-selected walking speed measured in a test to the one measured in the standard test in the same block. Each data point is a mean of four measurements (Fig. 2), with whiskers depicting ±1 standard deviation. The exception is for Manual Speed Selection, where the standard deviation is for two measurements because a pair of faster and slower than comfortable speeds are required to obtain one measurement of comfortable speed. Three data points from the same subject are connected with a line and marked in the same color. The linear model, correlation coefficient, and p-value for the fit are shown at the bottom right of each plot
Speeds in Manual Speed Selection were highly correlated with those in Overground 10 m but were slower overall. Walking speeds in Manual Speed Selection were 1.18±0.24 m s⁻¹, which were significantly lower (p = 0.01) than those in Overground 10 m (Fig. 4-b). This result agrees with previous studies with similar treadmill speed selection tests [10, 12]. The correlation value of R = 0.89 between Manual Speed Selection and Overground 10 m serves as the benchmark for our self-paced treadmill tests.
Both Self-Paced 2 min and Self-Paced 150 m were highly correlated with Overground 10 m. The correlation coefficients of the self-paced treadmill tests (R = 0.93 and R = 0.94) were slightly higher than for Manual Speed Selection (Fig. 4-c and d vs. b). The walking speeds in the self-paced treadmill tests were 1.23±0.28 m s⁻¹ and 1.23±0.27 m s⁻¹, respectively. The speeds were not significantly different from Overground 10 m speeds (p = 0.13 in both tests) and were slightly closer than Manual Speed Selection speeds were. However, participants with slower overground walking speeds reduced their speed more on the treadmill. The three slowest subjects walked significantly slower in the self-paced treadmill tests compared to the standard test (0.87±0.11 vs. 1.11±0.07 m s⁻¹, p = 6×10⁻⁵), while the remaining seven subjects did not (1.38±0.15 vs. 1.41±0.13 m s⁻¹, p = 0.49).
Walking speeds measured in Self-Paced 2 min and Self-Paced 150 m were very similar. The fitted model was close to the identity line (\(v_{SP150} = 0.96 v_{SP2} + 0.06\)), and the correlation coefficient was very high (R = 0.98, p = 7×10⁻²⁰).
The intra-subject variabilities in all tests were low and were not significantly different (p = 0.49). The average across all tests and participants was \(SD_{intra} = 0.042 \pm 0.030\) m s⁻¹. The variability values of individual tests were all lower than 0.1 m s⁻¹, which has been suggested as a threshold for clinical significance of differences in walking speed [5, 6, 9].
The self-paced treadmill tests required about a third of the time required for Manual Speed Selection. The mean and standard deviation of the times required for a trial of each test were \(T_{OG10} = 87 \pm 9\) s, \(T_{OG150} = 124 \pm 16\) s, \(T_{MSS} = 371 \pm 141\) s, \(T_{SP2} = 125 \pm 1\) s, and \(T_{SP150} = 138 \pm 35\) s. Walking speed test type had a significant effect on measurement time (ANOVA, p = 4×10⁻³⁷). All the tests were significantly different from each other (paired t-tests, p < 0.002), except for Self-Paced 2 min and Self-Paced 150 m (p = 0.051) and for Overground 150 m and Self-Paced 2 min (p = 0.754). Manual Speed Selection took the longest on average and also was the most variable across subjects. The large time variation was due to some subjects having large gaps between the speeds identified to be faster or slower than comfortable speeds while others had smaller gaps.
Analysis of speed convergence in the self-paced treadmill tests suggests that the preset time and distance can be much shorter than 2 min and 150 m. The mean and standard deviation of the convergence time in Self-Paced 2 min were \(t_{cnvg} = 22 \pm 22\) s, while the mean and standard deviation of the convergence distance in Self-Paced 150 m were \(d_{cnvg} = 42 \pm 29\) m (Fig. 5). This convergence distance corresponded to \(t_{cnvg} = 34 \pm 22\) s in time, significantly longer than that in Self-Paced 2 min (p = 0.048). This result suggests that the times used in the current Self-Paced 2 min (\(T_{SP2} = 125\) s) and Self-Paced 150 m (\(T_{SP150} = 138\) s) could be much shorter. For example, the average speed during the last five seconds of the first minute of the Self-Paced 2 min test is not statistically different from the current measure (p = 0.89). This would require about one sixth the time of the conventional treadmill speed test.
Convergence of walking speeds in self-paced treadmill tests. Walking speeds normalized by final estimated speed in a Self-Paced 2 min and b Self-Paced 150 m tests. Walking speed from individual trials are shown in colored lines. The mean and ±1 standard deviation across all trials are shown as a black line and gray shaded area. The solid and dotted vertical lines indicate the mean and mean plus one standard deviation of convergence time and distance
The survey results suggested that subjects found walking at their comfortable speeds in the self-paced treadmill tests to be as comfortable as in the common treadmill speed selection test but not as comfortable as in overground tests. The mean and standard deviation of the scores for "it was comfortable walking" were 4.3±0.7 for Overground 10 m, 4.4±0.5 for Overground 150 m, 3.5±1.0 for Manual Speed Selection, 3.9±0.7 for Self-Paced 2 min, and 3.8±0.8 for Self-Paced 150 m, where 1 is strongly disagree and 5 is strongly agree. The scores for the "it was easy to choose my walking speed" statement were 4.4±0.7, 4.5±0.7, 3.0±1.2, 3.3±0.7 and 3.4±1.0, respectively. Speed test type had a significant effect on survey results (ANOVA, p = 0.002 and 1×10⁻⁵, respectively). Comfort and ease of speed selection in self-paced tests were not significantly different from those in the conventional treadmill test (paired t-tests, p > 0.10) but were worse than those in overground tests (p < 0.053).
Our results indicate that the proposed self-paced treadmill can be used to measure self-selected walking speed. Subjects selected walking speeds in self-paced treadmill tests that were highly correlated with their speeds in the standard overground test. Intra-subject speed variations in the self-paced treadmill tests were low, demonstrating repeatability. The self-paced treadmill tests required only about a third of the time to complete of a common treadmill test, with no reduction in comfort or ease.
Although the walking speeds from self-paced treadmill tests highly correlated with the standard 10-meter walk test, the actual speeds were not the same. More specifically, subjects who walked at slow speeds in Overground 10 m walked even slower in Self-Paced 2 min and Self-Paced 150 m (Fig. 4-c,d). We can speculate different reasons for this observation. First, our self-pacing controller may be tuned better for normal and fast walking than walking at slow speeds. However, that would not explain why the slow walking subjects also selected slower speeds in Manual Speed Selection (Fig. 4-b). Second, which is more compelling in our opinion, contextual changes [31–33] other than segment dynamics (i.e. force interactions between subjects and the treadmill or ground) may have a larger effect during slower walking. The influence of these contextual changes may depend on walking speed because control strategies may change for different speeds [38, 39] as modeling studies suggest slower walking should rely more on active balance control than on passive dynamics [40]. This hypothesis could be tested by studying how the amount of context-induced gait changes correlate with walking speed. Whatever the reason, the strong correlation between self-paced and overground speeds suggests that changes in self-selected walking speed on the self-paced treadmill will translate into changes during overground walking, though the absolute magnitudes may differ.
Subjects selected to walk at very similar speeds on our self-paced treadmill whether they were walking for a preset time or a preset distance. This was unexpected because it would seem inconsistent with the minimum effort principle. So why did subjects walk at similar speeds in the preset time (Self-Paced 2 min) and preset distance (Self-Paced 150 m) tests? First, subjects may have tried to fulfill the experimenter's expectation. We instructed the subjects to walk at their comfortable speed in all five tests, which the subjects may have interpreted as walking at a particular speed. However, such an interpretation, or an intent to match experimenter expectations, was not apparent from subject feedback. Second, it could be that the objective of walking for a preset time was not clear to subjects because it is different enough from other walking tasks that they had experienced. Walking for a preset distance is close to walking to a target location, which is very common in daily life. Walking or running on a treadmill in a gym for a preset time as a workout might seem similar but is different from the preset time test in our study, in that the speed is usually set based on energy expenditure goals. For the unique task of walking for a preset time in an experiment, subjects may have aimed to walk in a way they were most familiar with, which is to walk for a preset distance. Regardless of the reason, all subjects in our study self-selected to walk at similar speeds in the preset time and preset distance tests. Therefore, we can use a preset time on a self-paced treadmill to measure self-selected walking speeds, which can be easier to administer than a preset distance.
The proposed self-pacing controller is different from most previous controllers in that it uses data from treadmill force plates to estimate subject speed and position. Therefore, it requires a force-instrumented treadmill, and subjects should not cross over the belts when stepping, which can interfere with their natural gait. However, stepping on the correct belt on an instrumented treadmill is a common requirement for gait studies [13], in which case, the self-pacing controller can be used with little overhead. We have previously tested other approaches that require additional parts on subjects, such as motion capture markers or string potentiometers, and those setups can easily increase the burden in complex gait experiments, such as studies on robotic exoskeletons or prostheses [14, 41]. Improving the performance of the self-pacing controller in the presence of cross-over steps would allow it to be used in additional protocols or when using single-belt instrumented treadmills. To enable position updates during cross-over steps, the algorithm should be able to estimate the timing and position of new foot contacts without the assumption that each step is made on the corresponding belt. Using the COP estimated from GRFs combined from both belts and sensing abrupt changes in this COP could be an effective approach.
Another difference from most prior self-pacing controllers is that ours adjusts the treadmill speed only once per footstep. Most other self-paced treadmill controllers update treadmill speed at a higher frequency (30 ∼120 Hz) [16–18]. If the treadmill speed instantaneously matches subject body speed, it will fluctuate within every stride due to natural speed oscillations in normal walking (Fig. 3-a) and may introduce undesired treadmill dynamics. To minimize this effect, previous studies low-pass filtered the estimated body state with a low cutoff frequency (e.g. 2 Hz), which can introduce time delays. Instead, our controller updates the treadmill speed once-per-footstep based on the mean values in that footstep. We find our approach to be conceptually more consistent with the control goal of matching walking speed, not instantaneous speed. A more thorough investigation of treadmill speed adjustment strategies could be instructive and might improve the self-pacing controller. For example, we use a simple heuristic control scheme (Eq. 4) with low control gains in matching subject speed and position, which is similar to previous approaches [16]. While higher gains can respond more quickly to speed and position changes, we empirically found lower gains to be stable and reliable for walking at steady speeds and moderate speed changes. Gain scheduling that matches large speed changes as well as steady walking would extend the potential use of self-paced treadmills in gait studies.
In the future, this self-pacing controller could be extended to address additional locomotion behaviors and its usability could be improved. We expect that the controller could be extended to running and to inclined and declined surfaces with only minimal changes. The human response to the controller under these conditions would need to be tested in an experiment similar to the one described in this study. It should be possible to create a version of the self-pacing software that runs on a personal computer without a real-time target machine, which would allow additional researchers to use the technique. We plan to update our repository with such extensions as they occur [42].
The proposed self-paced treadmill can be used in rehabilitation treatment and in gait assistance research but should be re-validated for substantially different populations or tasks. All of the subjects that participated in our experiment found walking on the self-paced treadmill intuitive and easy. However, the subtle dynamics and apparent contextual differences induced by self-paced treadmills may have a larger effect for subjects with different health status or for different locomotion tasks. For example, it has been reported that children with cerebral palsy experienced larger changes in gait on a self-paced treadmill than typically developing children [35]. Nevertheless, for healthy adults walking at typical speeds, self-selected walking speed on this self-paced treadmill can be used as an indication of overground walking behavior.
We presented a self-paced treadmill controller for force-instrumented treadmills that can be used to measure self-selected walking speeds. The controller is adapted from a previous study [15] and solely uses force-plate data to estimate and adapt to the subject's walking speed and position. To validate its use for measuring self-selected walking speeds, we compared walking speeds measured in a range of walking speed tests, where the subjects were instructed to walk at or select their comfortable speed. The tests using our self-paced treadmill measured walking speeds that were highly correlated with those from the standard overground test. The differences in the measured speeds from the self-paced treadmill and overground tests were small and consistent. The low intra-subject variability of measured speeds supports the reliability of the self-paced treadmill tests. The times required for the self-paced treadmill tests were a few times less than that for a common treadmill test, where subjects manually select their comfortable speeds, with the potential for further substantial reductions in duration. Subjects found the self-paced treadmill tests to be as comfortable and easy as the common treadmill test. These results demonstrate that measurements of self-selected walking speed made using the self-paced treadmill are relevant to overground conditions, and that the self-paced treadmill provides a strong alternative to manual speed selection on an instrumented treadmill. We provide a complete description and code for the self-pacing controller and graphical user interface to facilitate use by other gait researchers and clinicians [36].
Main data of the study are included in the supplementary information files. All data collected in the study are available from the corresponding author on reasonable request. The code for self-paced treadmill is available in the self-paced-treadmill repository on GitHub [36].
GRF: ground reaction force
COP: center of pressure
RMS: root mean square
Bohannon RW. Comfortable and maximum walking speed of adults aged 20–79 years: reference values and determinants. Age Ageing. 1997; 26(1):15–9.
Ralston HJ. Energy-speed relation and optimal speed during level walking. Internationale Zeitschrift für Angewandte Physiologie Einschliesslich Arbeitsphysiologie. 1958; 17(4):277–83.
Srinivasan M. Optimal speeds for walking and running, and walking on a moving walkway. Chaos: An Interdiscip J Nonlinear Sci. 2009; 19(2):026112.
Song S, Geyer H. Predictive neuromechanical simulations indicate why walking performance declines with ageing. The J Physiol. 2018; 596(7):1199–210.
Purser JL, Weinberger M, Cohen HJ, Pieper CF, Morey MC, Li T, Williams GR, Lapuerta P. Walking speed predicts health status and hospital costs for frail elderly male veterans. J Rehabil Res Dev. 2005; 42(4).
Hardy SE, Perera S, Roumani YF, Chandler JM, Studenski SA. Improvement in usual gait speed predicts better survival in older adults. J Am Geriatr Soc. 2007; 55(11):1727–34.
Goldie PA, Matyas TA, Evans OM. Deficit and change in gait velocity during rehabilitation after stroke. Arch Phys Med Rehabil. 1996; 77(10):1074–82.
Wolf SL, Catlin PA, Gage K, Gurucharri K, Robertson R, Stephen K. Establishing the reliability and validity of measurements of walking time using the emory functional ambulation profile. Phys Ther. 1999; 79(12):1122–33.
Fritz S, Lusardi M. White paper: "Walking speed: the sixth vital sign". J Geriatr Phys Ther. 2009; 32(2):2–5.
Dal U, Erdogan T, Resitoglu B, Beydagi H. Determination of preferred walking speed on treadmill may lead to high oxygen cost on treadmill walking. Gait & Posture. 2010; 31(3):366–9.
Nagano H, Begg RK, Sparrow WA, Taylor S. A comparison of treadmill and overground walking effects on step cycle asymmetry in young and older individuals. J Appl Biomech. 2013; 29(2):188–93.
Malatesta D, Canepa M, Fernandez AM. The effect of treadmill and overground walking on preferred walking speed and gait kinematics in healthy, physically active older adults. Eur J Appl Physiol. 2017; 117(9):1833–43.
Lee SJ, Hidler J. Biomechanics of overground vs. treadmill walking in healthy individuals. J Appl Physiol. 2008; 104(3):747–55.
Zhang J, Fiers P, Witte KA, Jackson RW, Poggensee KL, Atkeson CG, Collins SH. Human-in-the-loop optimization of exoskeleton assistance during walking. Science. 2017; 356(6344):1280–4.
Feasel J, Whitton MC, Kassler L, Brooks FP, Lewek MD. The integrated virtual environment rehabilitation treadmill system. IEEE Trans Neural Syst Rehabil Eng. 2011; 19(3):290–7.
Sloot L, Van der Krogt MM, Harlaar J. Self-paced versus fixed speed treadmill walking. Gait & Posture. 2014; 39(1):478–84.
Yoon J, Park H-S, Damiano DL. A novel walking speed estimation scheme and its application to treadmill control for gait rehabilitation. J Neuroeng Rehabil. 2012; 9(1):62.
Plotnik M, Azrad T, Bondi M, Bahat Y, Gimmon Y, Zeilig G, Inzelberg R, Siev-Ner I. Self-selected gait speed-over ground versus self-paced treadmill walking, a solution for a paradox. J Neuroeng Rehabil. 2015; 12(1):20.
GRAIL - Motekforce Link. https://www.motekmedical.com/product/grail/, Accessed: 2-20-2020.
Sloot LH, Harlaar J, Van der Krogt MM. Self-paced versus fixed speed walking and the effect of virtual reality in children with cerebral palsy. Gait & Posture. 2015; 42(4):498–504.
Van der Krogt MM, Sloot LH, Buizer AI, Harlaar J. Kinetic comparison of walking on a treadmill versus over ground in children with cerebral palsy. J Biomech. 2015; 48(13):3577–83.
Fung J, Richards CL, Malouin F, McFadyen BJ, Lamontagne A. A treadmill and motion coupled virtual reality system for gait training post-stroke. CyberPsychol Behav. 2006; 9(2):157–62.
Gates DH, Darter BJ, Dingwell JB, Wilken JM. Comparison of walking overground and in a computer assisted rehabilitation environment (CAREN) in individuals with and without transtibial amputation. J Neuroeng Rehabil. 2012; 9(1):81.
Kim J, Gravunder A, Park H-S. Commercial motion sensor based low-cost and convenient interactive treadmill. Sensors. 2015; 15(9):23667–83.
Minetti AE, Boldrini L, Brusamolin L, Zamparo P, McKee T. A feedback-controlled treadmill (treadmill-on-demand) and the spontaneous speed of walking and running in humans. J Appl Physiol. 2003; 95(2):838–43.
Von Zitzewitz J, Bernhardt M, Riener R. A novel method for automatic treadmill speed adaptation. IEEE Trans Neural Syst Rehabil Eng. 2007; 15(3):401–9.
Sloot L, Van der Krogt MM, Harlaar J. Energy exchange between subject and belt during treadmill walking. J Biomech. 2014; 47(6):1510–3.
Snaterse M, Ton R, Kuo AD, Donelan JM. Distinct fast and slow processes contribute to the selection of preferred step frequency during human walking. J Appl Physiol. 2011; 110(6):1682–90.
Graham JE, Ostir GV, Kuo Y-F, Fisher SR, Ottenbacher KJ. Relationship between test methodology and mean velocity in timed walk tests: a review. Arch Phys Med Rehabil. 2008; 89(5):865–72.
Seethapathi N, Srinivasan M. The metabolic cost of changing walking speeds is significant, implies lower optimal speeds for shorter distances, and increases daily energy estimates. Biology letters. 2015; 11(9):20150486.
Mohler BJ, Thompson WB, Creem-Regehr SH, Pick HL, Warren WH. Visual flow influences gait transition speed and preferred walking speed. Exp Brain Res. 2007; 181(2):221–8.
O'Connor SM, Donelan JM. Fast visual prediction and slow optimization of preferred walking speed. J Neurophysiol. 2012; 107(9):2549–59.
McIntosh GC, Brown SH, Rice RR, Thaut MH. Rhythmic auditory-motor facilitation of gait patterns in patients with Parkinson's disease. J Neurol Neurosurg Psychiatry. 1997; 62(1):22–6.
Brinkerhoff SA, Murrah WM, Hutchison Z, Miller M, Roper JA. Words matter: Instructions dictate "self-selected" walking speed in young adults. Gait & Posture. 2019. https://www.sciencedirect.com/science/article/abs/pii/S0966636219303522.
Van der Krogt MM, Sloot LH, Harlaar J. Overground versus self-paced treadmill walking in a virtual environment in children with cerebral palsy. Gait & Posture. 2014; 40(4):587–93.
Self-paced-treadmill repository. https://github.com/smsong/self-paced-treadmill/tree/JNER2020, Accessed: 2-20-2020.
Chan WL, Pin TW. Reliability, validity and minimal detectable change of 2-minute walk test, 6-minute walk test and 10-meter walk test in frail older adults with dementia. Exp Gerontol. 2019; 115:9–18.
Helbostad JL, Moe-Nilssen R. The effect of gait speed on lateral balance control during walking in healthy elderly. Gait & Posture. 2003; 18(2):27–36.
Fettrow T, Reimann H, Grenet D, Crenshaw J, Higginson J, Jeka JJ. Walking cadence affects the recruitment of the medial-lateral balance mechanisms. Frontiers in Sports and Active Living. 2019; 1:40.
Hobbelen DG, Wisse M. Controlling the walking speed in limit cycle walking. Int J Robot Res. 2008; 27(9):989–1005.
Chiu VL, Voloshina A, Collins S. An ankle-foot prosthesis emulator capable of modulating center of pressure. IEEE Trans Biomed Eng. 2019; 67(1):166–176.
Self-paced-treadmill repository. https://github.com/smsong/self-paced-treadmill, Accessed: 2-20-2020.
The authors thank all participants of this study as well as Maxwell Donelan and Arthur Kuo for discussions about preset time and preset distance walking.
This material is based upon work supported by the National Science Foundation under Grant No. CMMI-1734449.
Department of Mechanical Engineering, Stanford University, Stanford, CA, USA
Seungmoon Song, Hojung Choi & Steven H. Collins
Seungmoon Song
Hojung Choi
Steven H. Collins
SS and SC conceived the study and designed the experiment, SS and HC developed the algorithm, SS conducted experiments and analyzed data, SS drafted the manuscript, SS and SC edited the manuscript, and all authors approved the submitted manuscript.
Correspondence to Seungmoon Song.
Ethical approval for the study was granted by the Stanford University Institutional Review Board. All participants provided written informed consent.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.
Song, S., Choi, H. & Collins, S.H. Using force data to self-pace an instrumented treadmill and measure self-selected walking speed. J NeuroEngineering Rehabil 17, 68 (2020). https://doi.org/10.1186/s12984-020-00683-5
Self-paced treadmill
Self-selected walking speed
Force-instrumented treadmill
Does the probability of multiple independent events follow a normal distribution?
I'm heavily into role-playing systems, scripted a set of utilities in ruby for pen-and-paper games and I sort of understood statistics when I took it, but I could never for the life of me figure out the following:
Given a varying-length series of independent 50/50 probabilities, i.e. 2, 3, 5 or 7 coin tosses, would the total number of successes (heads) have the same probability distribution as a fair roll of a die with number of sides equivalent to the number of coins + 1 (given that coins can produce all failures -- a zero)?
This is a comparison, say, between the d20 system used by DnD and the multiple success-check dice used by Abberant. So in other words, would rolling a 20-sided die and flipping 20 coins and counting the heads result in the same probability distribution?
I would be interested in a link that explains and visually represents the difference between distributions of independent events and a single event.
EDIT: I was able to satisfy my basic curiosity with the above script, modified. No hard mathematical answer, but I'm not well-versed in this area anyway. Figured it was worth learning a bit though.
probability distributions dice
Alex Nye
In case you're still interested, the outcome of rolling a 20-sided die is uniform over all faces of the die, so the probability of observing any individual face is $\frac{1}{20}$. However, the number of heads on 20 coin flips follows a binomial distribution where the probability of $k$ heads is $\binom{20}{k}\left(\frac{1}{2}\right)^{20}$. – Max Sep 2 '12 at 6:45
Thanks! I noticed it wasn't really even but I wanted to know what was really going on. – Alex Nye Sep 2 '12 at 7:33
en.wikipedia.org/wiki/Binomial_distribution en.wikipedia.org/wiki/Uniform_distribution_(discrete) – Douglas Zare Sep 2 '12 at 8:20
One simple way to think about probabilities is to enumerate all the possibilities and weight them according to their likelihoods, and then you can just count the number of different combinations. (This approach will work when the events are discrete and few in number.)
Let's start with a fair 4-sided die (you'll see the reason for this odd choice soon) and a fair coin. By "fair", I mean that all the possible outcomes are equally likely. That is, the likelihood of getting a 1 is exactly the same as getting a 4, for example, and that the likelihood of getting heads is exactly the same as getting tails. As @MichaelChernick points out, if you were to flip the coin 4 times and count the number of heads, you would find there are 5 possible final counts, so we will equalize the ranges by flipping the coin only 3 times, and subtracting 1 from the result of the die roll. Thus, both will range from 0 to 3. Now we can lay out the possibilities.
For the die:
{0, 1, 2, 3}
All of which are equally likely, because we've stipulated the die is fair, and so each possible event has a probability of 1/4.
For the coin, it's more complicated. Each individual flip is {H, T}, but we ultimately want to know about the combination of 3 different flips, so we need to lay out the combinatorics and then count the resulting number of heads.
possibility 1 2 3 number of heads
1: {H, H, H, 3
2: H, H, T, 2
3: H, T, H, 2
4: H, T, T, 1
5: T, H, H, 2
6: T, H, T, 1
7: T, T, H, 1
8: T, T, T} 0
We see here that the number of possibilities grows exponentially with each additional flip. Specifically, it doubles, because there are two possible outcomes of each flip; if we were flipping a 3-sided coin (assuming one could exist) it would triple. If we had thrown a 6-sided die and flipped the coin 5 times, it would have taken 32 lines to lay out all the possible combinations of outcomes. In our case, with three flips, it took $2^3=8$ lines to lay out all of the possibilities.
Now (again) because we stipulated that heads and tails are equally likely, and because we have laid out all the combinations, each possibility is equally likely, having a probability of 1/8. But note that this does not mean that each possible number of heads is equally likely. There are three different ways to end up getting 1 head, and three ways to end up with 2 heads, but only 1 way each of getting 0 or 3 heads. Thus, the probability of getting 0 heads is 1/8, of getting 1 head is 3/8, of getting 2 heads is 3/8, and of getting 3 heads is 1/8. In general, the probability of getting the number of "successes" $k$ (i.e., heads in our case) out of the number of "trials" $n$ (flips), where each trial has a probability of success $p$, will be: $$ p(k)=\frac{n!}{k!(n-k)!}p^k(1-p)^{n-k} $$ As a result of this, "the coin tosses sort of lump in the middle", as you suspect, and as you can clearly see:
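If you want to check this numerically rather than by hand, a short Python script (analogous to the Ruby utilities mentioned in the question) enumerates the 8 flip sequences and compares the head-count distribution with the uniform die; the variable names are of course arbitrary:

```python
from itertools import product
from math import comb
from collections import Counter

n = 3  # three coin flips, compared with a fair 4-sided die relabelled 0..3

# Enumerate all 2**n equally likely flip sequences and count heads in each.
head_counts = Counter(seq.count('H') for seq in product('HT', repeat=n))
coin_pmf = {k: head_counts[k] / 2**n for k in range(n + 1)}

# Closed form: Binomial(n, 1/2), versus a uniform die with n + 1 faces.
binom_pmf = {k: comb(n, k) * 0.5**n for k in range(n + 1)}
die_pmf = {k: 1 / (n + 1) for k in range(n + 1)}

print(coin_pmf)   # {0: 0.125, 1: 0.375, 2: 0.375, 3: 0.125}
print(binom_pmf)  # identical to the enumeration
print(die_pmf)    # {0: 0.25, 1: 0.25, 2: 0.25, 3: 0.25}
```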
These distributions do have things in common (which I think is what is confusing you): they are both discrete, have the same number of possible results (at least under our modified arrangement), and have the same average (more technically expected value). However, as their Wikipedia pages state, they differ in their higher moments; namely their variances (uniform: $(n^2-1)/12$ vs. binomial: $np(1-p)$ ) and kurtoses (uniform: $-(6n^2+6)/(5n^2-5)$ vs. binomial: $(1-6p+6p^2)/(np(1-p))$ ) differ.
On a slightly different note, the title of the question asks about how the distribution of coin flips will compare to the normal distribution, while the body of the question asks about how it compares to a discrete uniform. My answer here has mostly focused on the body of the question, but I should mention that the binomial is sometimes colloquially referred to as the "discrete normal" distribution. One way to think about this is that as the number of coin flips goes to infinity, the binomial distribution converges to the (continuous) normal distribution.
gung♦
Thank you. A fantastic answer. I ended up reading a bit more on how to code a function for a random number on a normal distribution and used it in the roleplay utilities. Bell Curves just feel so right. – Alex Nye Sep 3 '12 at 7:38
Gung points out that although the distributions look similar they are different, and as I explained the probability space is different. One deals with the number rolled from a single die while the other represents the sum of several outcomes from coin tossing. Also -1 for the die distribution: even with a 4-sided die as in the example, dice normally do not have 0 as a number on the side. Also I think it would be more appropriate to compare the actual result from a 6-sided die with six tosses of a fair coin. Notice that even in Gung's example the moments are different. – Michael Chernick Sep 3 '12 at 11:25
@MichaelChernick, I recognize that dice don't have a 0. My idea was to equalize the ranges by subtracting 1 from the result of the die roll. Thus, eg, if you rolled a 1, then subtracted 1, your result would be 0. OTOH, if you rolled a 4, then subtracted 1, the result would be 3. Also, the reason I didn't use a 6-sided die is that the coin flips would have required 32 lines to lay out all the possibilities, & I'm just too lazy for that. – gung♦ Sep 3 '12 at 12:27
The answer is no. The outcomes aren't even the same.
To address the new question: if you take a six-sided die and roll it once, you get a uniform distribution on the integers from 1 to 6. That means each outcome has probability 1/6 and the expected value is 3.5. If you toss 6 coins at random (or equivalently one coin 6 times), you get a binomial distribution for the number of heads with parameters n=6 and p=1/2. The expected number of heads is 6/2 = 3. Another difference is that you can get 0 heads. So for the number of heads the possible outcomes are 0, 1, 2, 3, 4, 5, and 6, while the die has possible outcomes 1, 2, 3, 4, 5, and 6.
Michael Chernick
So do the coin tosses sort of lump in the middle around the half-way point of the number of coins? That's my intuition from what I've seen but I'm looking for hard numbers. – Alex Nye Sep 2 '12 at 4:29
A spatio-temporal autoregressive model for monitoring and predicting COVID infection rates
Peter Congdon (ORCID: orcid.org/0000-0003-1934-9205)
Journal of Geographical Systems volume 24, pages 583–610 (2022)
The COVID-19 epidemic has raised major issues with regard to modelling and forecasting outcomes such as cases, deaths and hospitalisations. In particular, the forecasting of area-specific counts of infectious disease poses problems when counts are changing rapidly and there are infection hotspots, as in epidemic situations. Such forecasts are of central importance for prioritizing interventions or making severity designations for different areas. In this paper, we consider different specifications of autoregressive dependence in incidence counts as these may considerably impact on adaptivity in epidemic situations. In particular, we introduce parameters to allow temporal adaptivity in autoregressive dependence. A case study considers COVID-19 data for 144 English local authorities during the UK epidemic second wave in late 2020 and early 2021, which demonstrate geographical clustering in new cases—linked to the then emergent alpha variant. The model allows for both spatial and time variation in autoregressive effects. We assess sensitivity in short-term predictions and fit to specification (spatial vs space-time autoregression, linear vs log-linear, and form of space decay), and show improved one-step ahead and in-sample prediction using space-time autoregression including temporal adaptivity.
Forecasts of future infectious disease incidence have had major policy importance, for example in the COVID-19 epidemic of 2020-2021. However, even short-term forecasts may face difficulties in practice. These include limited data, quantifying forecast uncertainty, and specification issues (Petropoulos and Makridakis 2020; Roda et al. 2020; Stehlík et al. 2020). Where separate infection time series for a number of areas are available, this may assist forecasts through a borrowing strength mechanism (Haining et al. 2021), with Shand et al. (2018) noting the gain from taking "advantage of the spatial and temporal dependence structures so that the statistical inference at one location can borrow strength from neighbouring regions in both space and time". However, modelling and predicting area trajectories in infectious disease poses particular problems when counts are changing rapidly in epidemic situations, and there may well be geographic infection hotspots.
Notions of borrowing strength through spatial random effects are a major feature of the Bayesian disease mapping approach for area disease counts (Kang et al. 2016), and adaptations of disease mapping to modelling longitudinal infectious disease data have been discussed in a number of papers (e.g. Clements et al. 2006; Coly et al. 2021). Consider, in particular, applications to epidemic time series for sets of administrative areas, which are available in several countries for the COVID-19 epidemic. A widely adopted strategy for such data, aiming at short term prediction, involves low order autoregression in infectious disease counts or rates, in both an area itself (the focus area), and in areas surrounding the focus area (Shand et al. 2018; Paul and Held 2011). Existing approaches have focussed on spatial variation in autoregressive dependence, so allowing for geographic heterogeneity (Dowdy et al. 2012).
The contribution and novelty of this paper is to show how different specifications of autoregressive dependence in incidence counts may considerably impact on adaptivity in epidemic situations. In particular, we introduce temporal as well as spatial variation in autoregressive dependence and show that this feature provides much improved predictive performance in situations where infection counts are rapidly changing.
Such rapid fluctuations in cases, associated with multiple epidemic waves, have been a feature of the COVID-19 epidemic. Sharp upward trends in cases have initially tended to be geographically concentrated, with subsequent diffusion away from initial hotspots (Dowdy et al. 2012). Effective policy responses in such situations depend on forecasting approaches that provide a perspective on short-term future implications of current trends (Shinde et al. 2020). In particular, geographically disaggregated forecasts are important for prioritizing interventions or severity designations, such as the "local tiers" in the UK COVID-19 policy response (Hunter et al. 2021).
The approach used here can potentially be generalized to model longitudinal count data in non-disease applications involving areas, or for longitudinal count data for units other than areas. An example of the former might be applications involving spatial forecasting and spatial diffusion of count data (e.g. Glaser 2017; Glaser et al. 2021). Examples of such diffusion include behavioural copycat effects (Schweikert et al. 2021).
In this paper, we assess predictive performance of an autoregressive model for infectious disease counts, applied to COVID-19 data for 144 English local authorities during the UK epidemic second wave—at the end of 2020 and into early 2021. These local authorities are in the South East of England, where a sharp (and geographically concentrated) upturn in cases in late 2020 was attributed to the emergence of a new COVID variant, the "Kent variant" or alpha variant (World Health Organization 2021). The model proposed here allows for both spatial and time variation in autoregression coefficients. We show clear gains in prediction over a less general specification. Impacts of alternative model features are considered, namely the choice between a linear (identity link) or log-linear model form, and the assumed form of weighting infections in neighbouring areas. We use Bayesian inference and estimation, via the BUGS (Bayesian inference Using Gibbs Sampling) package (Lunn et al. 2009).
The typical form of data encountered in analysis of spatio-temporal infections data involves incidence counts \(y_{it}\) for areas \(i=1,...,N\) and times \(t=1,...,T\). However, some spatio-temporal models for such data have used normalizing transformations of originally count data. Thus, Shand et al. (2018) consider a logarithmic transformation of yearly HIV diagnosis rates (per 100,000 population) for US counties.
Alternatively for models applied specifically to counts, Poisson and negative binomial time series regression methods may be used. Other count distributions may be used (Jalilian and Mateu 2021; Yu 2020). Spatio-temporal adaptations of disease mapping have been applied to analysis of infections, including across and within area random walks (e.g. Zhang et al. 2019; Jalilian and Mateu 2021; Lowe et al. 2021). Both Shand et al. (2018) and Paul and Held (2011), use spatially varying auto-regression applied either to lagged infection counts in an area itself (the focus area), or to areas surrounding the focus area (the neighbourhood), or both. A geographically adaptive scheme is also used by Lawson and Song (2010) in analysis of foot and mouth disease data. Lawson and Song (2010) use a focus area and neighbourhood lag in flu infection counts as an offset (with known coefficient) in Poisson regression, with an application to COVID forecasts by area in Sartorius et al. (2021). Applications to COVID-19 forecasting, based on Paul and Held (2011), are provided by Giuliani et al. (2020) and Rui et al. (2021). Detection of space-time clusters in COVID-19 is exemplified by Martines et al. (2021).
For applications without spatial disaggregation, a wide range of methods have been used for COVID-19, and infectious diseases generally. These include autoregressive integrated moving average (ARIMA) models (e.g. Maleki et al. 2020; Chintalapudi et al. 2020; Petukhova et al. 2018), integer-valued autoregressive (INAR) models (Chattopadhyay et al. 2021), exponential smoothing (Petropoulos and Makridakis (2020); Gecili et al. (2021)), or bivariate forecasts. For example, the study by Johndrow et al. (2020) models COVID deaths as a lagged function of earlier new cases. For infectious diseases with an established seasonal pattern, SARIMA (seasonal ARIMA) forecasting has been used (Qiu et al. 2021). Applications of phenomenological models to COVID-19 incidence forecasts—based on mathematical representations of epidemic curves, such as the Richards model (Richards 1959)—include Roosa et al. (2020).
We focus here on infectious disease models using count data regression. We consider first models for count time series, without area disaggregation, as these can provide a basis for generalisation to area-time data. Relevant specifications may specify AR dependence on previous counts, or on previous latent means; models with autoregressive (AR) dependent errors (Hay and Pettitt 2001) may also be considered (Jalilian and Mateu 2021).
Time dependent autoregressive count data models
Consider Poisson distributed counts at times \(t=1,...,T,\) namely \(y_{t}\) \( \thicksim Poi(\mu _{t}),\) (with Poi for Poisson density, with means \(\mu _{t}),\) or negative binomial (NB) counts, \(y_{t}\) \(\thicksim Negbin(\mu _{t},\Omega )\) (with Negbin for negative binomial density, with means \( \mu _{t}\) and dispersion parameter \(\Omega )\). The parameterisation of the negative binomial is as in Zhou et al. (2012), namely
$$\begin{aligned} p(y|\mu ,\Omega )=\frac{(y+\Omega -1)!}{y!(\Omega -1)!}\left( \frac{\mu }{ \mu +\Omega }\right) ^{y}\left( \frac{\Omega }{\mu +\Omega }\right) ^{\Omega }. \end{aligned}$$
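As a quick numerical check of this parameterisation, the log-density can be evaluated with log-gamma functions (replacing the factorials, which also accommodates non-integer \(\Omega\)); the helper below is a sketch, not code from the paper or from BUGS.

```python
from math import lgamma, log, exp

def negbin_logpmf(y, mu, omega):
    """log p(y | mu, omega) with mean mu and variance mu + mu**2 / omega."""
    return (lgamma(y + omega) - lgamma(y + 1) - lgamma(omega)
            + y * log(mu / (mu + omega))
            + omega * log(omega / (mu + omega)))

# As omega grows, the negative binomial approaches a Poisson with mean mu.
print(exp(negbin_logpmf(3, mu=2.5, omega=1.0)))
print(exp(negbin_logpmf(3, mu=2.5, omega=1000.0)))
```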
In a simple autoregressive representation (Fokianos 2011), one may adopt an identity link, and, subject to suitable parameter constraints, specify AR1 (AR with first-order lag) dependence in lagged counts \(y_{t-1}\) and in latent means \(\mu _{t-1}.\) The general form of this representation is termed the autoregressive conditional Poisson (ACP) model by Heinen (2003), or the linear model by Fokianos (2011). Thus
$$\begin{aligned} \mu _{t}=\phi +\alpha y_{t-1}+\gamma \mu _{t-1}, \end{aligned}$$
where \(\phi \), \(\alpha ,\) and \(\gamma \) are all positive. An alternative log-linear model (Fokianos and Tjøstheim 2011) has a log-link with
$$\begin{aligned} \log (\mu _{t})=\nu _{t}=f+a\log (y_{t-1}+1)+c\nu _{t-1}, \end{aligned}$$
where \(\nu _{t}\) and \(\nu _{t-1}\) are the logarithms of \(\mu _{t}\) and \(\mu _{t-1}\) respectively, f is an intercept, and a and c are autoregressive coefficients.
In both Eqs. (1) and (2), the autoregressive coefficients could be taken as time varying, namely \(\{\alpha _{t},\gamma _{t}\}\) and \(\{a_{t},c_{t}\}.\) Varying intercepts to represent time dependent effects other than autoregressive, could also be added. For example in Eq. (1), one may take
$$\begin{aligned} \phi _{t}=\exp (\phi _{0}+\eta _{t}), \end{aligned}$$
where \(\eta _{t}\thicksim \mathcal {N}(\eta _{t-1},\sigma _{\eta }^{2})\) is a random walk with variance \(\sigma _{\eta }^{2}\). However, research on random coefficients has so far concentrated on random coefficient AR models, without lags in latent means (e.g. Sáfadi and Morettin 2003).
Random coefficient autoregressive area-time models
To generalize these representations to area-time infection count data (areas \(i=1,...,N\)), one may add lags to infection counts in spatially close areas (Martines et al. 2021). These reflect geographic infection spillover—due, for example, to social interactions between residents in different areas, or to cross boundary commuting (Mitze and Kosfeld 2021). To allow for spatial lag effects, let \(w_{ij}\) be row standardised spatial weights expressing spatial interaction between areas i and j, with \(\underset{j}{\sum } w_{ij}=1\). They may be based on adjacency of areas, or distances between them. For example, let \(h_{ij}=1\) for adjacent areas (with \(h_{ii}=0\)), and \( h_{ij}=0\) otherwise. Then, \(w_{ij}=h_{ij}/\underset{j}{\sum }h_{ij}.\) Consider Poisson distributed counts \(y_{it}\) \(\thicksim Poi(\mu _{it}),\) or NB counts, \(y_{it}\) \(\thicksim Negbin(\mu _{it},\Psi ).\)
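A minimal sketch of building row-standardised weights from a 0/1 contiguity matrix, and of forming the corresponding spatial lag, is given below (the small matrix is purely illustrative):

```python
import numpy as np

# h[i, j] = 1 if areas i and j are adjacent, 0 otherwise (zero diagonal).
h = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

w = h / h.sum(axis=1, keepdims=True)   # each row of w sums to 1
y_prev = np.array([10.0, 4.0, 7.0, 2.0])
spatial_lag = w @ y_prev               # sum_j w_ij * y_{j,t-1} for each area i

print(w)
print(spatial_lag)
# Note: an area with no neighbours (an island) would need special handling
# to avoid dividing by a zero row sum.
```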
As in panel data analysis (Greene 2011), randomly varying autoregressive parameters can be used to allow for different epidemic trajectories in different areas. The most general representation would allow interactive autoregressive coefficients varying simultaneously by time and area. We also allow for area specific permanent effects \(\varepsilon _{i}\) (and \(e_{i}\)) and space-time varying intercepts \(\phi _{it}\) (and \(f_{it}\)).
The linear and log-linear representations, generalizing Eqs. (1) and (2) to area-time, become
$$\begin{aligned} \mu _{it}=\varepsilon _{i}+\phi _{it}+\alpha _{it}y_{i,t-1}+\beta _{it} \underset{}{\sum _{j}}w_{ij}y_{j,t-1}+\gamma _{it}\mu _{i,t-1}+\delta _{it} \underset{}{\sum _{j}}w_{ij}\mu _{j,t-1}, \end{aligned}$$
$$\begin{aligned}&\log (\mu _{it})=\nu _{it}=e_{i}+f_{it}+a_{it}\log (y_{i,t-1}+1)+b_{it} \underset{}{\sum _{j}}w_{ij}\log (y_{j,t-1}+1)+c_{it}\nu _{i,t-1}\nonumber \\&+d_{it} \underset{}{\sum _{j}}w_{ij}\nu _{j,t-1}. \end{aligned}$$
In Eq. (3), the \(\{\varepsilon _{i},\phi _{it},\alpha _{it},\beta _{it},\gamma _{it},\delta _{it}\}\) are assumed positive under the identity link. Covariate effects can be included in the specifications for \( \varepsilon _{i}\) and/or \(\phi _{it},\) and for \(e_{i}\) and \(f_{it},\) though arguably are more straightforwardly obtained under Eq. (4); see Fokianos and Tjøstheim (2011, page 564) regarding the time series case.
Assuming positive dependence on infection count lags is a reasonable prior assumption anyway, on subject-matter grounds, as higher existing numbers of infected subjects typically generate more future infections. It is implausible that more infections in period t in area i generate fewer infections in period \(t+1\). In Eq. (4), assuming positivity of the autoregressive coefficients \((a_{it},b_{it},c_{it},d_{it})\) is also a reasonable assumption, for the same reason. In practice, one may use log, or logit, links to space or space-time random effects. For example, a log-link involving fully interactive space-time structured random effects, \(\psi _{it} \) (e.g. Lagazio et al. 2001, Eq. 4) on the lagged focus area infection counts is
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{it}, \end{aligned}$$
with an intercept \(\alpha _{0}\), and assuming the \(\psi _{it}\) are constrained for identifiability (e.g. zero centred or corner constrained). Similar schemes can be applied to the other autoregressive coefficients.
However, including lags in latent means in Eqs. (3) and (4) will typically increase computational intensity, and a more tractable model is based only on lags in observed infection counts or log-transformed infection counts. Hence, the linear and log-linear specifications become
$$\begin{aligned} \mu _{it}=\varepsilon _{i}+\phi _{it}+\alpha _{it}y_{i,t-1}+\beta _{it} \underset{}{\sum _{j}}w_{ij}y_{j,t-1}, \end{aligned}$$
$$\begin{aligned} \log (\mu _{it})=e_{i}+f_{it}+a_{it}\log (y_{i,t-1}+1)+b_{it}\underset{}{ \sum _{j}}w_{ij}\log (y_{j,t-1}+1). \end{aligned}$$
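To make the structure of Eq. (6) concrete, the following sketch evaluates the linear conditional means for one week, given already-drawn parameter values; all arrays here are placeholders for quantities a fitted model would supply.

```python
import numpy as np

def linear_mean(y_prev, eps, phi_t, alpha_t, beta_t, w):
    """mu_{., t} = eps + phi_{., t} + alpha_{., t} * y_{., t-1}
                 + beta_{., t} * (W @ y_{., t-1}), as in Eq. (6)."""
    return eps + phi_t + alpha_t * y_prev + beta_t * (w @ y_prev)

# Illustrative values for N = 3 areas
w = np.array([[0.0, 0.5, 0.5],
              [0.5, 0.0, 0.5],
              [0.5, 0.5, 0.0]])
y_prev = np.array([12.0, 30.0, 7.0])
mu_t = linear_mean(y_prev, eps=np.full(3, 0.5), phi_t=np.full(3, 1.0),
                   alpha_t=np.full(3, 0.6), beta_t=np.full(3, 0.2), w=w)
print(mu_t)  # conditional means, to be used as negative binomial means
```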
Also area-time fully interactive specifications for autoregressive coefficients may be subject to overparameterisation (Regis et al. 2021, page 6), and one may propose reduced coefficient schemes. For example, for the lag term on \(y_{i,t-1}\) in Eq. (6), one may take
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{i} \end{aligned}$$
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{t}. \end{aligned}$$
The option (8.1) is used in Paul and Held (2011), who assume \(\psi _{i}\) are spatially structured random effects.
Here we investigate the gains—in the context of predicting future COVID-19 counts—of an autoregressive specification with separate area and time effects, for example in the linear model,
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{1i}+\psi _{2t}, \end{aligned}$$
where \(\psi _{1i}\) is a spatially structured conditional autoregressive or CAR effect (Besag et al. 1991), and \(\psi _{2t}\) is a random walk in time. Both \(\psi _{1i}\) and \(\psi _{2t}\) are zero centred; for instance, such centering is automatically implemented in the BUGS car.normal function. This specification may provide greater adaptivity to rapidly changing infection counts in epidemic exponential and downturn phases, and avoids the heavy parameterisation of a fully interactive scheme.
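The additive decomposition in Eq. (9) yields, after exponentiation, an \(N \times T\) array of strictly positive coefficients. The sketch below assembles such an array from placeholder draws; a genuine implementation would use zero-centred CAR and random-walk draws from the posterior rather than the crude simulations shown here.

```python
import numpy as np

rng = np.random.default_rng(1)
N, T = 144, 28
alpha0 = np.log(0.5)

# Placeholder draws: psi1 stands in for a zero-centred CAR effect,
# psi2 for a zero-centred first-order random walk in time.
psi1 = rng.normal(0.0, 0.3, size=N); psi1 -= psi1.mean()
psi2 = np.cumsum(rng.normal(0.0, 0.1, size=T)); psi2 -= psi2.mean()

alpha = np.exp(alpha0 + psi1[:, None] + psi2[None, :])  # shape (N, T), all > 0
print(alpha.shape, alpha.min().round(3), alpha.max().round(3))
```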
Remaining effects
For the permanent terms \(\varepsilon _{i}\) and \(e_{i}\), one might use iid or spatially correlated random effects \(\kappa _{i}\) to represent enduring risk variations for infectious disease, in both endemic and epidemic phases. For example, taking iid effects, and with a positivity constraint,
$$\begin{aligned} \varepsilon _{i}=\exp (\kappa _{i}) \end{aligned}$$
where \(\kappa _{i}\sim \mathcal {N}(\theta _{0},\sigma _{\kappa }^{2})\) are permanent effects across times. These terms might also include constant effects of covariates \(X_{i}\). Thus for a single covariate
$$\begin{aligned} \kappa _{i}\sim \mathcal {N}(\theta _{0}+\theta _{1}X_{1i},\sigma _{\kappa }^{2}), \end{aligned}$$
where \(\theta =(\theta _{0},\theta _{1})\) are regression parameters.
For the general time terms \(\phi _{it}\) and \(f_{it}\), various specifications are possible. These might include Fourier series representations for an infectious disease with clear seasonal fluctuations (Paul and Held 2011), or a second degree polynomial (in days) in a COVID-19 application (Giuliani et al. 2020). The latter scheme is proposed as adapting to the exponential growth in the upturn phase of the epidemic. There is no conclusive evidence so far that COVID-19 is seasonal. For example, the UK first COVID-19 wave peaked in the spring and early summer of 2020. Some studies argue that COVID will eventually become seasonal (e.g. Greene 2011). However, there will likely still be considerable variation between areas in timing of COVID infections.
Here, we use area-specific first-order random walks to (a) represent trends not fully captured by the autoregressive effects on infection lags and (b) be geographically adaptive. Thus in Eq (6), we have
$$\begin{aligned} \phi _{it}=\exp (\eta _{it}) \end{aligned}$$
where \(\eta _{it}\thicksim \mathcal {N}(\eta _{i,t-1},\sigma _{\eta }^{2}).\) A corner constraint—setting selected parameter(s) to known values—is used for identifiability (Stegmueller 2014) and was less computationally intensive than centering samples at each iteration in the BUGS software. Thus, \(\phi _{it}=\exp (\eta _{it}^{\prime }),\) where \(\eta _{it}^{\prime }=\eta _{it}-\eta _{i1},\) which is equivalent to setting \(\eta _{i1}=0\) (Lagazio et al. 2001, page 29).
The area specific effects \(\eta _{it}\) will increase adaptivity. However, we also expect autoregressive coefficients including time effects, as in Eq. (9), to be adaptive to epidemic growth (and decay) phases. For example, in the growth phase with \(y_{i,t+1}\) typically much exceeding \(y_{it}\), the \( \psi _{2t}\) in Eq. (9) will tend to be higher in order to better predict increasing counts \(y_{i,t+1}\) in the next period.
The time varying terms \(\phi _{it}\) and \(f_{it}\) might also include time varying regression effects \(\theta _{t}\), or impacts of time varying covariates, including lagged covariates (e.g. Lowe et al. 2021).
Full model
In the case study analysis described below, we assume negative binomial sampling, with the linear model as in Eq. (6), namely
$$\begin{aligned} \mu _{it}=\varepsilon _{i}+\phi _{it}+\alpha _{it}y_{i,t-1}+\beta _{it} \underset{}{\sum _{j}}w_{ij}y_{j,t-1}, \end{aligned}$$
and the log-linear, as in Eq. (7), namely
$$\begin{aligned} \log (\mu _{it})=e_{i}+f_{it}+a_{it}\log (y_{i,t-1}+1)+b_{it}\underset{}{ \sum _{j}}w_{ij}\log (y_{j,t-1}+1). \end{aligned}$$
Initially, we take \(w_{ij}\) to be first-order adjacency indicators: \(h_{ij}=1\) for areas i and j adjacent, and \(h_{ij}=0\) otherwise, with \( w_{ij}=h_{ij}/\underset{}{\sum _{j}}h_{ij}.\) The autoregressive coefficients are taken as
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{1i}+\psi _{2t} \end{aligned}$$
$$\begin{aligned} log(\beta _{it})=\beta _{0}+\psi _{3i}+\psi _{4t}, \end{aligned}$$
under the linear model, and
$$\begin{aligned} log(a_{it})=a_{0}+\psi _{5i}+\psi _{6t}, \end{aligned}$$
$$\begin{aligned} log(b_{it})=b_{0}+\psi _{7i}+\psi _{8t}, \end{aligned}$$
under the log-linear model. The parameters \(\{\psi _{1i},\psi _{3i},\psi _{5i},\psi _{7i}\}\) are spatial CAR effects (Besag et al. 1991), and \(\{\psi _{2t},\psi _{4t},\psi _{6t},\psi _{8t}\}\) are first-order random walks in time. The remaining effects are specified as
$$\begin{aligned} log(\varepsilon _{i})=\kappa _{1i}, \end{aligned}$$
$$\begin{aligned} log(\phi _{it})=\eta _{1it} \end{aligned}$$
$$\begin{aligned} \eta _{1it}\thicksim \mathcal {N}(\eta _{1i,t-1},\sigma _{\eta 1}^{2}), \end{aligned}$$
$$\begin{aligned} \kappa _{1i}\sim \mathcal {N}(\mu _{\kappa 1},\sigma _{\kappa 1}^{2}), \end{aligned}$$
in the linear model, and
$$\begin{aligned} e_{i}=\kappa _{2i} \end{aligned}$$
$$\begin{aligned} f_{it}=\eta _{2it}, \end{aligned}$$
in the log-linear model. The parameters \(\{\alpha _{0},\beta _{0},a_{0},b_{0},\mu _{\kappa 1},\mu _{\kappa 2}\}\) are fixed effects.
Out-of-sample forecasts \(\widetilde{y}_{i,T+s}\) for periods \(T+1,T+2,...,etc.\) , are based on extrapolating \(\psi _{2t},\psi _{4t},\) and \(\eta _{1it}\) (or analogous log-linear effects) to provide means \(\widetilde{\mu }_{i,T+s}\) (Sáfadi and Morettin 2003). Thus, one-step ahead predictions to \(T+1\) in the linear model are
$$\begin{aligned} \psi _{2,T+1}\thicksim \mathcal {N}(\psi _{2T},\sigma _{\psi 2}^{2}), \end{aligned}$$
$$\begin{aligned} \eta _{1i,T+1}\thicksim \mathcal {N}(\eta _{1i,T},\sigma _{\eta 1}^{2}), \end{aligned}$$
and these are incorporated in Eq. (6) to provide \(\mu _{i,T+1}\) from which forecast cases at \(T+1\) can be sampled.
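For a single MCMC draw, the forecasting step for the linear model can be sketched as follows; the inputs are placeholders for posterior quantities, and the negative binomial is sampled through numpy's (n, p) parameterisation, which reproduces the mean and dispersion form used above.

```python
import numpy as np

def forecast_one_step(rng, y_T, eps, eta_T, sigma_eta, alpha0, psi1, psi2_T,
                      sigma_psi2, beta_T1, w, omega):
    """Sample forecast counts y_{i, T+1} for one posterior draw (linear model).

    beta_T1 : neighbourhood AR coefficients at T+1, built analogously
              from beta0, psi3 and an extrapolated psi4.
    """
    psi2_T1 = rng.normal(psi2_T, sigma_psi2)        # extrapolate time random walk
    eta_T1 = rng.normal(eta_T, sigma_eta)           # extrapolate area random walks
    alpha_T1 = np.exp(alpha0 + psi1 + psi2_T1)      # own-area AR coefficients
    mu_T1 = eps + np.exp(eta_T1) + alpha_T1 * y_T + beta_T1 * (w @ y_T)
    # Negative binomial with mean mu and dispersion omega: n = omega, p = omega / (mu + omega)
    return rng.negative_binomial(omega, omega / (mu_T1 + omega))
```

Repeating this over the retained MCMC draws gives the posterior predictive distribution of \(\widetilde{y}_{i,T+1}\), from which totals and credible intervals follow.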
Spatial weighting
There has been discussion on how to weight the contribution of neighbouring areas in the spatial lags, with proposals including a power law that has declining weights for second, third, etc., nearest neighbours (Cheng et al. 2016; Meyer and Held 2014). Here, we allow for an infection overspill effect from both first- and second-order neighbours in a sensitivity analysis.
Thus, first-order neighbours are assigned weights \(h_{ij1}=1\) for adjacent areas, and \(h_{ij1}=0\) otherwise; while second-order neighbours are assigned weights \(0<\lambda <1,\) so that \(h_{ij2}=\lambda \) for areas i and j which are second-order neighbours, and \(h_{ij2}=0\) otherwise. Then,
$$\begin{aligned} w_{ij}=\frac{(h_{ij1}+h_{ij2})}{\underset{j}{\sum }(h_{ij1}+h_{ij2})}. \end{aligned}$$
Space-time clusters
A range of methods have been proposed to assess space-time clustering (e.g. Chen et al. 2016; Mclafferty 2015). Here, we consider the LISA (Local Indicators of Spatial Association) indicator of spatial clustering in infection risk at one time point (Anselin 1995) and extend it to assess extended spatial clustering over various temporal windows—multiple successive time units (here these are successive weeks). A particular aim is to detect spatial clustering during the exponential ascent phase of the epidemic wave. Hence, one can assess where the epidemic phase, and its associated health care burden, is geographically concentrated.
Define predicted COVID case rates \(r_{it}=\mu _{it}/P_{i}\), where \(P_{i}\) are area populations. Predicted rates could also be defined for out-of-sample periods, with \(\widetilde{r}_{it}=\widetilde{\mu }_{it}/P_{i},\) \(t=T+1,T+2,...etc.,\) to predict future space-time risk patterns.
For a particular week define cluster indicators \(J_{it}=1\) if own area rates \(r_{it}\), and average rates in the locality \(r_{it}^{L}=\underset{}{ \sum _{j\ne i}}w_{ij}r_{jt}/\underset{}{\sum _{j\ne i}}w_{ij},\) are both elevated. This is known as a high-high cluster in LISA terminology. If either or both of these conditions do not hold, then \(J_{it}=0\).
Here, we define elevated rates as those more than 50% above the region wide or national rate—here the rate for the Greater South East, namely \(R_{t}= \underset{}{\sum _{i}}\mu _{it}/\underset{i}{\sum }P_{i}.\) So \(J_{it}=1\) if \( J_{1it}=J_{2it}=1\) where
$$\begin{aligned} J_{1it}=I(r_{it}>1.5R_{t}), \end{aligned}$$
\(J_{2it}=I(r_{it}^{L}>1.5R_{t}),\)
and where \(I(C)=1\) if the comparison C is true, 0 otherwise.
Elevated rates through D successive weeks define a space-time cluster. So if \(D=5\), a space-time cluster of length D would require \( J_{it}=J_{i,t+1}=J_{i,t+2}=J_{i,t+3}=J_{i,t+4}=1\). Using MCMC sampling one can obtain the probability that area i at week t defines a space-time cluster of length D.
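Given posterior draws of the fitted means, these probabilities are simple Monte Carlo proportions. A sketch follows; the array names and shapes are assumptions about how the MCMC output is stored.

```python
import numpy as np

def cluster_probability(mu_draws, pop, w, start, D=5, threshold=1.5):
    """Posterior probability that each area is a high-high cluster centre
    for D consecutive weeks beginning at week index `start`.

    mu_draws : (S, N, T) posterior draws of the fitted means mu_{it}
    pop      : (N,) area populations
    w        : (N, N) row-standardised spatial weights with zero diagonal
    """
    r = mu_draws / pop[None, :, None]                # area rates, (S, N, T)
    r_local = np.einsum('ij,sjt->sit', w, r)         # locality average rates
    R = mu_draws.sum(axis=1) / pop.sum()             # region-wide rate, (S, T)
    high = threshold * R[:, None, :]
    J = (r > high) & (r_local > high)                # high-high indicators
    persistent = J[:, :, start:start + D].all(axis=2)  # all D weeks flagged, (S, N)
    return persistent.mean(axis=0)                   # per-area posterior probability
```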
Bayesian estimation uses the BUGS (Bayesian inference Using Gibbs Sampling) program (Lunn et al. 2009). Two chains of 20,000 iterations are taken, with inferences from the last 10,000, and convergence checks as in Brooks and Gelman (1998).
Gamma priors with shape one, and rate 0.001, are adopted on inverse variance parameters and on the negative binomial overdispersion parameter \( \Omega \), while normal \(\mathcal {N}(0,100)\) priors are assumed on fixed effects \(\{\alpha _{0},\beta _{0},a_{0},b_{0},\mu _{\kappa 1},\mu _{\kappa 2}\}.\) A beta(1,1) prior is assigned to \(\lambda \) in the analysis including second-order neighbours.
Dataset and geographical setting
The data for the study consist of weekly totals of new COVID cases in a subregion of the UK. The time span considered starts at the week 19-26 July 2020 (constituting week 1), with one analysis considering the subsequent 24 weeks, and another considering 29 weeks through to the week 31 January-6 February, 2021. In July 2020, new COVID cases across the entire UK averaged under 700 daily, whereas towards the end of 2020, there was a pronounced increase, with some days reaching over 75 thousand; however, in early 2021, there was a tailing off in new cases. See Fig. 1 for daily UK data, which includes a loess smooth. The epidemic ascent phase is irregular, with an early lesser peak in October and early November 2020, a slight tailing off in new cases in early December 2020, then a major increase in late December and January 2021.
Daily New Cases across the UK. July 2020 to February 2021
The analysis here considers part of England, namely three standard regions (London, South East, East) combined to give a broad region, here termed the Greater South East (GrSE for short), with a population of 24.4 million. Figure 2 shows weekly totals of new cases in this region. Starting at under 1,500 weekly, they rose to over 200,000 at the epidemic peak (on week 24) but then fell back sharply. As for the entire UK, there is a minor peak at week 17, preceding the main epidemic wave. There are \(N=144\) areas in the region, administrative areas called local authorities.
Weekly Totals of New COVID-19 Cases, Greater South East, July 2020 to February 2021
This part of England contains the epicentre of a localized cluster associated with a new variant (the Kent variant, or B.1.1.7 variant) (Challen et al. 2021). The surge in new cases associated with this cluster was the precursor to the larger national UK-wide escalation of cases. The outbreak of the new variant was concentrated in areas to the east of London (in Kent and Essex counties) and in the North East of London itself.
Model evaluations
As a first evaluation of alternative model forms, we make out-of-sample predictions for cases at weeks 24 and 29 across the Greater South East. The forecasts are based on training data for weeks 1-23, and weeks 1-28, respectively (so \(T=23\) and \(T=28\) respectively). Week 24 followed the ascent phase, when new cases of infection were sharply increasing, and in fact infections peaked in week 24. Week 29 was in a phase of sharp decline in new cases.
In a first analysis, a comparison between two different autoregressive formulations (M1 and M2) is made. Both specifications condition on the first week (\(t=1\)). Both specifications also assume a linear model, as in Eqs (6) and (12), namely
$$\begin{aligned} \mu _{it}=\varepsilon _{i}+\phi _{it}+\alpha _{it}y_{i,t-1}+\beta _{it} \underset{}{\sum _{j}}w_{ij}y_{j,t-1},\qquad \qquad t=2,...,T, \end{aligned}$$
\(log(\varepsilon _{i})=\kappa _{1i},\)
\(log(\phi _{it})=\eta _{1it},\)
$$\begin{aligned} \kappa _{1i}\sim \mathcal {N}(\mu _{\kappa 1},\sigma _{\kappa 1}^{2}). \end{aligned}$$
In the first specification (M1), the autoregressive coefficients \(\alpha _{it}\) and \(\beta _{it}\) are taken as spatially, but not temporally, varying:
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{1i}, \end{aligned}$$
\(log(\beta _{it})=\beta _{0}+\psi _{3i},\)
In the second (M2), the autoregressive coefficients are taken as both space and time varying:
$$\begin{aligned} log(\alpha _{it})=\alpha _{0}+\psi _{1i}+\psi _{2t}, \end{aligned}$$
\(log(\beta _{it})=\beta _{0}+\psi _{3i}+\psi _{4t}.\)
The parameters \(\{\psi _{1i},\psi _{3i}\}\) are CAR effects (Besag et al. 1991), with \(h_{ij}=1\) for adjacent areas (\(h_{ij}=0\) otherwise), while \( \{\psi _{2t},\psi _{4t}\}\) are first-order random walks in time.
One-step ahead out-of-sample forecasts \(\widetilde{y}_{i,T+1}\) for week \(T+1\) (either week 24 or week 29) are based on extrapolating \(\psi _{2t},\psi _{4t},\) and \(\eta _{1it}\) to week \(T+1\).
Two subsequent analyses are made. In the first, we compare the best performing from M1 and M2 with its log-linear equivalent (M3). In the second analysis, we allow the spatial interaction weights \(w_{ij}\) to include both first- and second-order neighbours—this defines model M4. Both these analyses are for the case when \(T=23,\) and out-of-sample predictions are to week 24.
Assessing performance
Out-of-sample predictive performance is based on whether the 95% credible interval for predicted new cases \(\widetilde{y}_{\bullet ,T+1}\) (summing across 144 areas in the GrSE) in week \(T+1\) contains the actual number of new cases \(y_{\bullet ,T+1}\). An indicator of this is the posterior probability
$$\begin{aligned} \zeta =Pr(\widetilde{y}_{\bullet ,T+1}>y_{\bullet ,T+1}|y) \end{aligned}$$
that one-step ahead predicted cases exceed actual new cases. Tail probabilities (e.g. under 0.1 or over 0.9) represent under or over-prediction of actual cases. These probabilities can be obtained for individual areas, namely
$$\begin{aligned} \zeta _{i}=Pr(\widetilde{y}_{i,T+1}>y_{i,T+1}|y). \end{aligned}$$
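Both probabilities are Monte Carlo proportions over posterior predictive draws, for example as in the following sketch (array shapes are placeholders):

```python
import numpy as np

def exceedance_probability(pred_draws, y_actual):
    """zeta_i = Pr(predicted > actual | data) for each area.

    pred_draws : (S, N) posterior predictive draws of counts for week T+1
    y_actual   : (N,) observed counts in week T+1
    """
    return (pred_draws > y_actual).mean(axis=0)

# Region-wide version, using draws of the summed counts:
# zeta = (pred_draws.sum(axis=1) > y_actual.sum()).mean()
```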
Also considered is the ranked probability score, with abbreviation \( RPS_{T+1} \) (Czado et al. 2009), which measures the accuracy of forecasts (in matching actual outcomes) when expressed as probability distributions. In a Bayesian context, the latter will be sampled values from posterior predictive densities for the outcome, \(p(\widetilde{y}|y).\) For area i, the ranked probability score is obtained by monitoring
$$\begin{aligned} \left| \widetilde{y}_{i,T+1}-y_{i,T+1}\right| +\left| \widetilde{y}_{i,T+1}-\overset{\thickapprox }{y}_{i,T+1}\right| \end{aligned}$$
where \(\overset{\thickapprox }{y}_{i,T+1}\) is an independent draw from the posterior predictive density. The second term is a penalty for uncertainty, which increases with the predictive variance. Lower \(RPS_{T+1}\) values represent better fit.
To assess fit for the observed (training) data, we obtain the widely applicable information criterion (WAIC) (Watanabe 2010), and also RPS scores for one-step ahead predictions, based on infections in the previous week. The RPS scores can be aggregated over areas for separate weeks, \( RPS_{t}\) \((t=2,...,T),\) to show where particular models are better or worse fitting.
Predictive performance of space-time autoregression model
Table 1 compares the out-of-sample performance of models M1 and M2 for weeks 24 and 29, based, respectively, on training data for weeks 1-23 and 1-28. Table 2 compares model fit for the training data analysis, as well as predictive performance for one-step ahead predictions within the sample.
Table 1 Out-of-Sample Predictions, Models M1 and M2 Compared
It can be seen from Table 1 that a model including time effects in the autoregressions on previous cases leads to improved out-of-sample predictions. The credible intervals under M2 for predicted new cases in weeks 24 and 29 comfortably include the actual total GrSE cases; though the M2 estimates of total cases are less precise and show some skew (posterior means exceeding medians). The mean RPS score under M2 also shows the effects of skewness, especially for the forecast to \(T+1=24\); the median values favour M2.
Table 2 In-Sample (Training Data) Fit, and One-Step Ahead In-Sample Predictions, Models M1 and M2
The probabilities \(\zeta \) in Eq. (21) indicate that model M1 underpredicts new cases at week 24; this week was in fact the peak of the epidemic, following weeks when actual cases were rapidly increasing. By contrast, in the downturn phase, at week 29, model M1 overpredicts new cases. Area specific probabilities \(\zeta _{i}\), as in Eq. (22), show higher totals of local authority areas with cases under or overpredicted under M1, especially in the downturn phase.
Table 2 shows that model M2 has a lower in-sample WAIC than model M1 in both training data analyses. One-step ahead predictions within the observed data periods also favour M2. For example, the total RPS for M1, accumulated over weeks 1-28, is around twice that for M2 (1.12 million vs 559 thousand). Some weeks show greater discrepancies between the models.
Table 3 compares the two models against information on changing infection totals (weekly totals across GrSE) for the analysis of weeks 1-28. Comparing \(RPS_{t}\) between models M1 and M2 (first three columns of Table 3) shows that model M1 has problematic fit in the irregular ascent phase (weeks 16-19 when cases rise then fall back again), and also, more markedly, in the epidemic descent phase (weeks 26 onwards), when the \(RPS_{t}\) under M1 is more than three times that of M2.
Table 3 Comparative Fit by Week, Models M1 and M2, Weeks 1-28
The last two columns of Table 3 and Fig. 3 show how the \(\psi _{2t}\) in model M2 adapt to the minor early peak at week 17, and then to sharply increasing cases in the exponential epidemic phase. They then decrease in line with the epidemic downturn.
Autoregressive Time Parameters and Total Infections (in units of 100,000)
Evaluating other model options
Table 4 compares linear and log-linear specifications (denoted M2 and M3) with space-time autoregressive effects, where the log-linear model M3 is defined by Eqs. (7), (13) and (15). This comparison is for weeks 1-23 as training data, and prediction ahead to week 24. For M3, we find a slight deterioration in fit to the training data and also a slight deterioration in out-of-sample prediction—though the latter is still satisfactory. However, skewness in the posterior density of \(\widetilde{y}_{\bullet ,T+1}\) is increased in M3 as against M2.
Table 4 Out-of-Sample Predictions and In-sample Fit, Models M2, M3 and M4 Compared, \(T = \) 23
Another version of the linear model is also considered (as M4), with spatial weights \(w_{ij}\) including second-order as well as first-order neighbours—as per Eq. (16). For model M4, we find no gain in fit over model M2 using first-order neighbours only. The out-of-sample prediction is satisfactory though, with no evidence of under or overprediction of cases in week \(T+1\). The posterior median estimate of \(\widetilde{y}_{.,T+1},\) namely new cases in week \(T+1\) across the greater South East, is 211,272 compared to the actual total of 210,099. The \(\lambda \) parameter has mean 0.76 with 95% credible interval (0.40, 0.99).
Detecting significant space-time clusters
Space-time clustering in infectious disease outbreaks is important in identifying the epicentre(s) of an outbreak. Space-time cluster prediction, for example to assess continued excess spatial clustering in future periods, is important in prioritizing interventions.
The "Kent variant" of COVID-19 (code B117), also known as the "English variant", emerged in late 2020 in specific parts of England, namely areas to the East and South East of London. The observed data suggest a localized surge of COVID-19 cases in November 2020 in these locations, which preceded the generalized national second wave epidemic peaking in late December of 2020 and early January of 2021. In terms of the weeks considered in the present study, we would expect significant space-time clustering in weeks 17-22, namely November 2020 and early December 2020.
We obtain, under model M2, area-specific probabilities of D successive periods with excess incidence in both the focus area and its locality. Excess incidence is taken as more than 50% above the average (modelled) rate for the entire region, namely the Greater South East. Assuming \(D=5\), then for a single MCMC iteration (\(s=1,...,S\)), one requires for area i to be a space-time cluster of length 5 that \( J_{it}^{(s)}=J_{i,t+1}^{(s)}=J_{i,t+2}^{(s)}=J_{i,t+3}^{(s)}=J_{i,t+4}^{(s)}=1. \) One then obtains the estimated posterior probability of such a sequence occurring by averaging the corresponding indicator over MCMC iterations.
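As a rough illustration of this calculation, the following sketch estimates, for each area, the posterior probability of D successive high-incidence weeks from an array of sampled cluster indicators. The array J_samples, its dimensions and its placeholder values are hypothetical stand-ins for MCMC output; this is not the authors' BUGS code.

```python
import numpy as np

# Hypothetical MCMC output: J_samples[s, i, t] = 1 if area i is a high-high
# cluster centre in week t for posterior draw s (placeholder values only).
rng = np.random.default_rng(0)
S, N, T = 1000, 144, 28
J_samples = rng.integers(0, 2, size=(S, N, T))

def persistence_probability(J, start_week, D):
    """Posterior probability, per area, of D successive cluster weeks
    beginning at start_week (0-based), averaged over MCMC draws."""
    window = J[:, :, start_week:start_week + D]   # shape (S, N, D)
    persistent = window.min(axis=2)               # 1 only if all D weeks are flagged
    return persistent.mean(axis=0)                # average over draws

probs = persistence_probability(J_samples, start_week=16, D=4)
print((probs > 0.9).sum(), "areas exceed a posterior probability of 0.9")
```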
Focussing on weeks 17-22, we find only one area with a posterior probability exceeding 0.9 of being the centre of a persistent space-time cluster of length 5 weeks. However, considering persistent clusters of length \(D=4\) weeks, there are seven areas with probabilities over 0.9, and eight areas with probabilities over 0.8. Figure 4 shows the estimated probabilities for \(D=4\) across the Greater South East of England, with a sharp delineation apparent between the "Kent variant" epicentre and other areas. Figure 5 shows in closer detail the areas in the epicentre. The Swale local authority, with a posterior probability of one, was among the Kent local authorities first affected by the new variant (Reuters 2021).
Posterior Probabilities of Space-Time Cluster of Length Four Weeks During Epidemic Ascent Phase, Local Authorities, Greater South East of England
Posterior Probabilities of Space-Time Cluster of Length Four Weeks. Detailed Focus
Of interest also are forecasts of clustering status. We consider training data for the first \(T=23\) weeks to make one-step ahead predictions of clustering in week 24. Thus cluster indicators \(J_{i,T+1}=1\) if own area rates \(\widetilde{r}_{i,T+1}=\widetilde{\mu }_{i,T+1}/P_{i}\), and average rates in the locality \(\widetilde{r}_{i,T+1}^{L}=\sum _{j\ne i}w_{ij}\widetilde{r}_{j,T+1}/\sum _{j\ne i}w_{ij}\), are both elevated as compared to the region-wide rate, \(\widetilde{r}_{T+1}=\sum _{i}\widetilde{\mu }_{i,T+1}/\sum _{i}P_{i}\). Rates more than 50% above the region-wide rate are considered elevated.
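The indicator construction just described can be written compactly as follows; this is only a sketch, with illustrative names (mu for predicted means, P for populations, W for the spatial weight matrix) that are not taken from the paper.

```python
import numpy as np

def cluster_indicators(mu, P, W, threshold=1.5):
    """High-high cluster indicators for a single forecast week.

    mu : predicted mean new cases per area (length N)
    P  : area populations (length N)
    W  : spatial weight matrix with zero diagonal
    An area is flagged when both its own rate and the weighted average rate of
    its locality exceed `threshold` times the region-wide rate (1.5 = 50% above).
    """
    r_own = mu / P
    r_local = (W @ r_own) / W.sum(axis=1)   # locality average rates
    r_region = mu.sum() / P.sum()           # region-wide rate
    return (r_own > threshold * r_region) & (r_local > threshold * r_region)
```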
In such short-term forecasting, one may compare predicted future clustering with "actual" clustering defined by observed disease counts. Thus, actual rates for area i are \(y_{i,T+1}/P_{i}\), with corresponding locality averages and region-wide rates; these are reliable point estimators for large disease counts. In fact, seven of the 144 areas are identified as actual cluster centres at week 24, the epidemic peak. Predicted and actual cluster status are compared using a \(2\times 2\) table accumulating correct classifications along the diagonal (areas where both actual and predicted cluster status are the same). We can then assess sensitivity, the proportion of actual high-high cluster centres correctly identified, and specificity, the proportion of non-cluster areas correctly identified.
Under model M2, we obtain posterior mean sensitivity (with 95% credible interval) of 0.93 (0.43,1.0), and posterior mean specificity of 0.965 (0.95,0.985). The model prediction is for slightly higher numbers of cluster centres than is actually the case (false positives, with posterior mean 4.8), and this reduces specificity. False negatives are infrequent, with posterior mean 0.5. Using the relationship accuracy = (sensitivity)(prevalence) + (specificity)(1 - prevalence), where the prevalence of high-high clustering is 7/144, one obtains an accuracy of around 0.964.
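The quoted accuracy can be checked directly from the stated relationship; the short calculation below uses the reported sensitivity, specificity and prevalence (the small discrepancy from 0.964 reflects rounding of the inputs).

```python
sensitivity = 0.93      # posterior mean under M2
specificity = 0.965
prevalence = 7 / 144    # actual high-high cluster centres at week 24

accuracy = sensitivity * prevalence + specificity * (1 - prevalence)
print(round(accuracy, 3))   # about 0.963, consistent with the reported value of around 0.964
```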
Covariate effects
There have been many studies on socio-demographic and environmental risk factors for COVID outcomes. Both incidence and mortality have been linked to area deprivation, urbanicity, poor air quality, and nursing home location (as area risk factors), and to non-white ethnicity and existing medical conditions (as individual risk factors). Impacts of such risk factors were clearly observable in the UK first wave of the COVID pandemic, concentrated in March to May of 2020 (Public Health England 2020; O'Dowd 2020; Quinio 2021; Dutton 2020).
The second UK wave is distinct from the first, in being strongly linked to the emergence of a new virus strain, and by the form of geographic clustering associated with the new strain (see section 5.3), namely a concentration in non-metropolitan areas in the south east of England, areas with relatively low concentrations of ethnic groups and area deprivation. This may tend to attenuate or distort the effect of area predictors \(X_{i}\), so that although their inclusion may improve fit and predictions, the substantive rationale for including them—as disease risk factors per se—is in doubt.
To illustrate this potential for distortion, we estimate a time-varying effect of rurality on COVID infection rates. Rurality in each local authority (LA) is measured by the proportion of micro-areas (lower super output areas) within each LA that are classified as rural towns or villages (Office of National Statistics 2013, Table 1b). One would expect rural areas, with lower population densities, to have lower infection and mortality rates (Lai et al. 2020). Matheson et al. (2020) attribute excess urban mortality (in the UK first COVID wave) to higher population density, more people-facing occupations in cities, and greater home overcrowding.
To establish its role for the second wave data, a regression analysis (with \(T=28\) weeks) is carried out with a time-varying effect of rurality (\(X_{i}\)), using the log-linear model. Thus, Eq. (6) is extended to include an additive term \(\theta _{t}X_{i}\), where \(\theta _{t}\) is a first-order random walk, with prior \(\theta _{t}\sim \mathcal {N}(\theta _{t-1},\sigma _{\theta }^{2})\). We find an irregular effect on infection rates, with \(\theta _{t}\) significantly negative in the early weeks of the study period, significantly positive in some later weeks, and often non-significant, with 95% intervals including zero (see Fig. 6).
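For intuition only, the snippet below simulates a first-order random-walk coefficient of this form and its additive contribution \(\theta _{t}X_{i}\) to the linear predictor; the standard deviation and rurality values are made up, and this is not the authors' BUGS implementation (which is available via the replication link in the code and data availability note).

```python
import numpy as np

rng = np.random.default_rng(1)
T, sigma_theta = 28, 0.1   # number of weeks and random-walk SD (made-up value)

# First-order random walk: theta_t ~ Normal(theta_{t-1}, sigma_theta^2), starting near zero
theta = np.cumsum(rng.normal(0.0, sigma_theta, size=T))

# Contribution theta_t * X_i of rurality to the log-linear predictor
X = rng.uniform(0.0, 1.0, size=144)        # placeholder rurality proportions
rurality_term = np.outer(X, theta)         # N x T matrix of additive terms
```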
Time Varying Effect of Rurality, \({\theta}_{t}\)
Discussion and future research
Forecasts of future infectious disease incidence, especially with spatial disaggregation, are important for policy purposes in epidemic situations. There are benefits in a longitudinal model form which borrows strength over areas since incidence levels tend to be spatially clustered—an example being the geographically concentrated COVID-19 outbreak associated with the "Kent variant" in the UK. Subsequent epidemic diffusion will also be influenced by spatial proximity. Hence, several models in the literature allow spatially varying autoregressive effects, and spatially varying dependence on infection levels in nearby areas.
However, temporal adaptivity and forecasting performance may be improved by allowing for time variation in the epidemic path, for example through space-time autoregressive dependencies. An econometric perspective on autoregressive dependence allowing for both heterogeneity over units and over time is provided by Regis et al. (2021), though they suggest (Regis et al. 2021, p. 6)—from a classical estimation perspective—that a full unit-time random effect structure may be overparameterized.
A full spatio-temporal structure may be applied when longitudinal data cover a relatively short period and can be made identifiable subject to appropriate constraints. Thus, Watson et al. (2017), using a Bayesian perspective, consider area data on Lyme disease over \(T=5\) years. They use the first four years to predict the last, using a full spatio-temporal autoregressive scheme allowing both spatial and temporal correlation.
However, over a longer set of time points, there would be a heavy parameterisation in a fully interactive scheme. In the present application, fully interactive autoregressive effects as in Eq. (5), and other space-time parameters as in Eq. (11), would involve 3NT unknown random effects (i.e. three times the number of data points). By contrast, the newly proposed space-time model—for example, in Eq. (9)—involves considerably fewer, \(NT+2(N+T)\), random effects. A fully interactive specification would also limit the form of the time dependence in autoregression that can be considered; for example, a low order polynomial in time might be used for \( \{\psi _{2t},\psi _{4t}\}\) in Eq. (12), instead of a random walk in time. Finally, with the separate space and time effects, as in Eq. (9), their distinct contribution to improved fit and forecasts can be assessed, and interpretability is straightforward.
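To make the comparison concrete, with N = 144 local authorities and T = 28 weeks as in this application, the two counts work out as follows (a simple arithmetic check, not a figure quoted in the paper).

```python
N, T = 144, 28                      # local authorities and weeks in this application
fully_interactive = 3 * N * T       # 12,096 random effects
proposed = N * T + 2 * (N + T)      # 4,376 random effects
print(fully_interactive, proposed)
```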
In the present study, over a longitudinal series of nearly 30 time points, the parsimonious space-time autoregressive representation provides improved one-step ahead forecasts as compared to a model allowing spatially varying autoregressive dependence only. The latter model is shown to underpredict new cases in the ascent phase of the epidemic (in November and early December 2020 for the UK second COVID-19 wave), when actual cases were rapidly increasing. By contrast, in the downturn phase, the model with only spatial variation in autoregressive effects overpredicts new cases.
Other substantive features of infectious epidemics can also be investigated, such as the location of prolonged space-time clusters. In the Greater South East of England, there is a clearly demarcated epicentre for the outbreak in the epidemic ascent phase (see Fig. 4).
Drawing on the time series literature on random coefficient autoregression, we have set out alternative linear and log-linear specifications applicable to the area-time situation. For the particular infectious disease data concerned, the linear model had a better fit, but further research on similar forms of data (including longitudinal area data on chronic as well as infectious disease, and indeed any form of longitudinal area count data) is indicated to establish the comparative strengths of the linear and log-linear forms. The above analysis has not considered the full scope of possible autoregressive dependence—including lags on latent means for both the focus area and its locality—as in Eqs. (3) and (4). Such a model was not tractable in the software used here. Extensions may also be envisaged to higher order lags, such as spatio-temporal AR1 (lag 1) and AR2 (lag 2) dependence for both the focus area and its surrounding locality in Eqs. (6) and (7).
Given that the COVID pandemic has typically involved multiple waves, one might also be interested in longitudinal modelling over two or more waves, for instance to compare area-specific infection rates at epidemic peaks. The method used here is more easily applied to multiwave data than one involving area specific phenomenological models (e.g. logistic, Richards) which would necessitate using latent switching parameters between waves.
Another generalisation is to related outcomes such as mortality and hospitalisations. This could involve generalisations of the linear and log-linear count regression specifications, such as Eqs. (6) and (7), to include borrowing strength over space, time and outcomes. This would be combined with multiple outcome count regression (Poisson or negative binomial). Alternatively, conditioning on modelled infections, one could model case fatality and hospitalisation as binomial responses.
Anselin L (1995) Local indicators of spatial association–LISA. Geogr Anal 27(2):93–115
Besag J, York J, Mollié A (1991) Bayesian image restoration with two applications in spatial statistics. Ann Inst Statist Math 43(1):1–59
Brooks S, Gelman A (1998) General methods for monitoring convergence of iterative simulations. J Comput Gr Stat 7(4):434–455
Burra P, Soto-Díaz K, Chalen I, Gonzalez-Ricon R, Istanto D, Caetano-Anollés G (2021) Temperature and latitude correlate with SARS-CoV-2 epidemiological variables but not with genomic change worldwide. Evol Bioinf 17:1176934321989695
Challen R, Brooks-Pollock E, Read J, Dyson L, Tsaneva-Atanasova K, Danon L (2021) Risk of mortality in patients infected with SARS-CoV-2 variant of concern 202012/1: matched cohort study. British Medical Journal, 372. https://www.bmj.com/content/372/bmj.n579
Chattopadhyay S, Maiti R, Das S, Biswas A (2021) Change-point analysis through INAR process with application to some COVID-19 data. Statistica Neerlandica (in press)
Chen C, Teng Y, Lin B, Fan I, Chan T (2016) Online platform for applying space-time scan statistics for prospectively detecting emerging hot spots of dengue fever. Int J Health Geogr 15(1):1–9
Cheng Q, Lu X, Wu J, Liu Z, Huang J (2016) Analysis of heterogeneous dengue transmission in Guangdong in 2014 with multivariate time series model. Scientif Rep 6(1):1–9
Chintalapudi N, Battineni G, Amenta F (2020) COVID-19 disease outbreak forecasting of registered and recovered cases after sixty day lockdown in Italy: a data driven model approach. J Microbiol, Immunol Infect 53(3):396–403
Clements A, Lwambo N, Blair L, Nyandindi U, Kaatano G, Kinung'hi S, Webster J, Fenwick A, Brooker S (2006) Bayesian spatial analysis and disease mapping: tools to enhance planning and implementation of a schistosomiasis control programme in Tanzania. Trop Med Int Health 11(4):490–503
Coly S, Garrido M, Abrial D, Yao A (2021) Bayesian hierarchical models for disease mapping applied to contagious pathologies. PLoS One 16(1):e0222898
Czado C, Gneiting T, Held L (2009) Predictive model assessment for count data. Biometrics 65(4):1254–1261
Dowdy D, Golub J, Chaisson R, Saraceni V (2012) Heterogeneity in tuberculosis transmission and the role of geographic hotspots in propagating epidemics. Proc Nat Acad Sci 109(24):9557–9562
Dutton A (2020) Coronavirus (COVID-19) related mortality rates and the effects of air pollution in England. Office of National Statistics, London, UK
Fokianos K (2011) Some recent progress in count time series. Statistics 45(1):49–58
Fokianos K, Tjøstheim D (2011) Log-linear Poisson autoregression. J Multivar Anal 102(3):563–578
Gecili E, Ziady A, Szczesniak R (2021) Forecasting COVID-19 confirmed cases, deaths and recoveries: revisiting established time series modeling through novel applications for the USA and Italy. PloS One 16(1):e0244173
Giuliani D, Dickson M, Espa G, Santi F (2020) Modelling and predicting the spatio-temporal spread of COVID-19 in Italy. BMC Infect Dis 20(1):1–10
Glaser S. (2017) A review of spatial econometric models for count data. Hohenheim Discussion Papers in Business, Economics and Social Sciences, No. 19–2017
Glaser S, Jung R, Schweikert K (2021) Spatial Panel Count Data Models: Modeling and Forecasting of Urban Crimes. Available at SSRN 3701642
Greene W (2011) Econometric analysis, 7th edn. Prentice Hall, USA
Haining R, Li G (2021) Spatial data and spatial statistics. In: Fischer M, Nijkamp P (eds) Handbook of Regional Science. Springer, pp 1961–1983
Hay J, Pettitt N (2001) Bayesian analysis of a time series of counts with covariates: an application to the control of an infectious disease. Biostatistics 2(4):433–44
Heinen A (2003) Modelling time series count data: an autoregressive conditional Poisson model. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=1117187
Hunter P, Brainard J, Grant A (2021) The impact of the November 2020 English national lockdown on COVID-19 case counts. https://www.medrxiv.org/content/10.1101/2021.01.03.21249169v1
Jalilian A, Mateu J (2021) A hierarchical spatio-temporal model to analyze relative risk variations of COVID-19: a focus on Spain, Italy and Germany. Stoch Environ Res Risk Assess 35:797–812
Johndrow J, Lum K, Gargiulo M, Ball P (2020) Estimating the number of SARS-CoV-2 infections and the impact of social distancing in the United States. https://arxiv.org/abs/2004.02605
Kang S, Cramb S, White N, Ball S, Mengersen K (2016) Making the most of spatial information in health: a tutorial in Bayesian disease mapping for areal data. Geosp Health 11(2):190–198
Lagazio C, Dreassi E, Biggeri A (2001) A hierarchical Bayesian model for space-time variation of disease risk. Stat Modell 1(1):17–29
Lai K, Webster C, Kumari S, Sarkar C (2020) The nature of cities and the Covid-19 pandemic. Curr Opin Environ Sustain 46:27–31
Lawson A, Song H (2010) Bayesian hierarchical modeling of the dynamics of spatio-temporal influenza season outbreaks. Spatial Spatio-temp Epidemiol 1(2–3):187–195
Lowe R, Lee S, O'Reilly K et al (2021) Combined effects of hydrometeorological hazards and urbanisation on dengue risk in Brazil: a spatiotemporal modelling study. Lancet Planet Health 5:e209-19
Lunn D, Spiegelhalter D, Thomas A, Best N (2009) The BUGS project: Evolution, critique and future directions. Stat Med 28(25):3049–67
Maleki M, Mahmoudi M, Wraith D, Pho K (2020) Time series modelling to forecast the confirmed and recovered cases of COVID-19. Travel Med Infect Dis 13:101742
Martines M, Ferreira R, Toppa R, Assunção L, Desjardins M, Delmelle E (2021) Detecting space-time clusters of COVID-19 in Brazil: mortality, inequality, socioeconomic vulnerability, and the relative risk of the disease in Brazilian municipalities. J Geogr Syst 23(1):7–36
Matheson J, Nathan M, Pickard H, Vanino E (2020) Why has coronavirus affected cities more than rural areas? Economics Observatory, https://www.coronavirusandtheeconomy.com/
Mclafferty S (2015) Disease cluster detection methods: recent developments and public health implications. Annal GIS 21(2):127–133
Meyer S, Held L (2014) Power-law models for infectious disease spread. Annal Appl Stat 8(3):1612–1639
Mitze T, Kosfeld R (2021) The propagation effect of commuting to work in the spatial transmission of COVID-19. J Geogr Syst. https://doi.org/10.1007/s10109-021-00349-3
O'Dowd A (2020) Covid-19: People in most deprived areas of England and Wales twice as likely to die. BMJ: British Med J. https://doi.org/10.1136/bmj.m2389
Office of National Statistics (2013) Urban and Rural Area Definitions for Policy Purposes in England and Wales: User Guide. Government Statistical Service, London
Paul M, Held L (2011) Predictive assessment of a non-linear random effects model for multivariate time series of infectious disease counts. Stat Med 30(10):1118–1136
Petropoulos F, Makridakis S (2020) Forecasting the novel coronavirus COVID-19. PLoS One. https://doi.org/10.1371/journal.pone.0231236
Petukhova T, Ojkic D, McEwen B, Deardon R, Poljak Z (2018) Assessment of autoregressive integrated moving average (ARIMA), generalized linear autoregressive moving average (GLARMA), and random forest (RF) time series regression models for predicting influenza A virus frequency in swine in Ontario Canada. PLoS One 13(6):e0198313
Public Health England (2020) Disparities in the risk and outcomes of COVID-19. PHE, London, 2020
Qiu H, Zhao H, Xiang H, Ou R, Yi J, Hu L, Ye M (2021) Forecasting the incidence of mumps in Chongqing based on a SARIMA model. BMC Publ Health 21(1):1–12
Quinio V (2021) Have UK cities been hotbeds of the Covid-19 pandemic? Centre for Cities. https://www.centreforcities.org/blog/have-uk-cities-been-hotbeds-of-covid-19-pandemic
Regis M, Serra P, Heuvel E (2021) Random autoregressive models: a structured overview. Econ Rev 2021:1–24
Reuters (2021) A Reuters Special Report. The Fatal Shore. https://www.reuters.com/investigates/special-report/health-coronavirus-uk-variant/
Richards F (1959) A flexible growth function for empirical use. J Exper Bot 10(2):290–301
Roda W, Varughese M, Han D, Li M (2020) Why is it difficult to accurately predict the COVID-19 epidemic? Infect Dis Modell 5:271–281
Roosa K, Lee Y, Luo R, Kirpich A, Rothenberg R, Hyman J, Yan P, Chowell G (2020) Short-term forecasts of the COVID-19 epidemic in Guangdong and Zhejiang China. J Clin Med 9(2):596
Rui R, Tian M, Tang M, Ho G, Wu C (2021) Analysis of the spread of COVID-19 in the USA with a spatio-temporal multivariate time series model. Int J Environ Res Publ Health 18(2):774
Sáfadi T, Morettin P (2003) A Bayesian analysis of autoregressive models with random normal coefficients. J Stat Comput Simul 73(8):563–573
Sartorius B, Lawson A, Pullan R (2021) Modelling and predicting the spatio-temporal spread of COVID-19, associated deaths and impact of key risk factors in England. Scientif Rep 11(1):1–11
Schweikert K, Huth M, Gius M (2021) Detecting a copycat effect in school shootings using spatio-temporal panel count models. Contemp Econ Policy. https://doi.org/10.1111/coep.12532
Shand L, Li B, Park T, Albarracín D (2018) Spatially varying auto-regressive models for prediction of new human immunodeficiency virus diagnoses. J Royal Stat Soc: Ser C (Appl Stat) 67(4):1003–1022
Shinde G, Kalamkar A, Mahalle P, Dey N, Chaki J, Hassanien A (2020) Forecasting models for coronavirus disease (COVID-19): a survey of the state-of-the-art. SN Computer Sci 1(4):1–15
Stegmueller D (2014) Bayesian hierarchical age-period-cohort models with time-structured effects: an application to religious voting in the US, 1972–2008. Elect Stud 33:52–62
Stehlík M, Kiseľák J, Dinamarca MA, Li Y, Ying Y (2020) On COVID-19 outbreaks predictions: issues on stability, parameter sensitivity, and precision. Stoch Anal Appl 39(3):383–4
Watanabe S (2010) Asymptotic equivalence of Bayes cross validation and Widely Applicable information Criterion in singular learning theory. J Mach Learn Res 11:3571–3594
Watson S, Liu Y, Lund R, Gettings J, Nordone S, McMahan C, Yabsley M (2017) A Bayesian spatio-temporal model for forecasting the prevalence of antibodies to Borrelia burgdorferi, causative agent of Lyme disease, in domestic dogs within the contiguous United States. PLoS One 12(5):e0174428
World Health Organization (WHO) (2021) Tracking SARS-CoV-2 Variants. WHO, Geneva. https://www.who.int/en/activities/tracking-SARS-CoV-2-variants/
Yu X (2020) Risk interactions of coronavirus infection across age groups after the peak of COVID-19 epidemic. Int J Environ Res Publ Health 17(14):5246
Zhang Y, Wang X, Li Y, Ma J (2019) Spatiotemporal analysis of influenza in China, 2005–2018. Scientif Rep 9:19650
Zhou M, Li L, Dunson D, Carin L (2012) Lognormal and gamma mixed negative binomial regression. Proc Int Conf Mach Learn 2012:1343–1350
School of Geography, Queen Mary University of London, Mile End Rd, London, E1 4NS, UK
Peter Congdon
Correspondence to Peter Congdon.
Code and Data availability
BUGS code and data for replication purposes are available at https://figshare.com/articles/software/SPATIOTEMP_REPLICATION_VERSION_txt/16628167
Congdon, P. A spatio-temporal autoregressive model for monitoring and predicting COVID infection rates. J Geogr Syst 24, 583–610 (2022). https://doi.org/10.1007/s10109-021-00366-2
Issue Date: October 2022
Keywords: Autoregressive, Bayesian
September 2010, 2(3): 265-302. doi: 10.3934/jgm.2010.2.265
When is a control system mechanical?
Sandra Ricardo1 and Witold Respondek2
Department of Mathematics, School of Sciences and Technology, University of Trás-os-Montes e Alto Douro, 5001-801 Vila Real, Portugal
INSA-Rouen, Laboratoire de Mathématiques, 76801 Saint-Etienne-du-Rouvray, France
Received: May 2010; Published: November 2010
In this work we present a geometric setting for studying mechanical control systems. We distinguish a special class: the class of geodesically accessible mechanical systems, for which the uniqueness of the mechanical structure is guaranteed (up to an extended point transformation). We characterise nonlinear control systems that are state equivalent to a system from this class and we describe the canonical mechanical structure attached to them. Several illustrative examples are given.
Keywords: Mechanical control systems, mechanical state equivalence, geodesic accessibility, symmetric product, state equivalence.
Mathematics Subject Classification: Primary: 53Bxx, 93Cxx; Secondary: 37Jx.
Citation: Sandra Ricardo, Witold Respondek. When is a control system mechanical? Journal of Geometric Mechanics, 2010, 2(3): 265-302. doi: 10.3934/jgm.2010.2.265
Effects of number of training generations on genomic prediction for various traits in a layer chicken population
Ziqing Weng1,
Anna Wolc1,2,
Xia Shen3,4,
Rohan L. Fernando1,
Jack C. M. Dekkers1,
Jesus Arango2,
Petek Settar2,
Janet E. Fulton2,
Neil P. O'Sullivan2 &
Dorian J. Garrick1,5
Genomic estimated breeding values (GEBV) based on single nucleotide polymorphism (SNP) genotypes are widely used in animal improvement programs. It is typically assumed that the larger the number of animals is in the training set, the higher is the prediction accuracy of GEBV. The aim of this study was to quantify genomic prediction accuracy depending on the number of ancestral generations included in the training set, and to determine the optimal number of training generations for different traits in an elite layer breeding line.
Phenotypic records for 16 traits on 17,793 birds were used. All parents and some selection candidates from nine non-overlapping generations were genotyped for 23,098 segregating SNPs. An animal model with pedigree relationships (PBLUP) and the BayesB genomic prediction model were applied to predict EBV or GEBV at each validation generation (progeny of the most recent training generation) based on varying numbers of immediately preceding ancestral generations. Prediction accuracy of EBV or GEBV was assessed as the correlation between EBV and phenotypes adjusted for fixed effects, divided by the square root of trait heritability. The optimal number of training generations that resulted in the greatest prediction accuracy of GEBV was determined for each trait. The relationship between optimal number of training generations and heritability was investigated.
On average, accuracies were higher with the BayesB model than with PBLUP. Prediction accuracies of GEBV increased as the number of closely-related ancestral generations included in the training set increased, but reached an asymptote or slightly decreased when distant ancestral generations were used in the training set. The optimal number of training generations was 4 or more for high heritability traits but less than that for low heritability traits. For less heritable traits, limiting the training datasets to individuals closely related to the validation population resulted in the best predictions.
The effect of adding distant ancestral generations in the training set on prediction accuracy differed between traits and the optimal number of necessary training generations is associated with the heritability of traits.
Genomic prediction in domestic animals is rapidly becoming the preferred method to evaluate individual genetic merit with advances in technology for massively parallel genotyping of SNPs (single nucleotide polymorphisms). Genomic selection is considered a promising approach, since it can yield higher rates of genetic gain and lower rates of inbreeding per generation than pedigree-based best linear unbiased prediction (PBLUP) [1, 2], which is the traditional approach for calculating estimated breeding values (EBV) based on phenotype and pedigree information [3]. Simulated and real data analyses have shown that accuracies of both genomic prediction and PBLUP can be influenced by the heritability of the trait, the nature of the fixed effects, and the extent of additive genetic relationships between phenotyped individuals and selection candidates [4]. Genomic prediction accuracies are affected by marker density [5], number of animals in the training population [6, 7], size and number of quantitative trait loci (QTL) [8, 9], and amount of linkage disequilibrium (LD) or linkage between markers and QTL [10]. Collectively, the latter two factors characterize the genomic architecture of the trait.
Based on simplistic theory, the larger is the number of animals used in training, the greater is the expected accuracy of genomic prediction [6, 7]. Inclusion of data on animals from past generations will increase the size of the training data set. As briefly described below, another reason for using data from all past generations is to avoid selection bias [11, 12]. Under random mating, the joint distribution between phenotypic and breeding values can be specified using the theory of covariance between relatives. This joint distribution is used to predict breeding values from phenotypes. In a population that is under selection, this joint distribution is altered in a way that depends on the type and intensity of selection and thus, prediction of breeding values becomes difficult. However, when inference is based on conditional distributions and conditioning is on data that includes all the information used for selection, it has been shown that the selection process can be ignored [12–14]. Pedigree-based additive genetic covariance between a candidate and its direct ancestor is halved by each additional generation. Thus, in PBLUP, under random mating, data from distant generations contribute little to the accuracy of prediction. In a simulated population under selection, it has been shown that using the data from the last two generations compared to that of the full pedigree resulted in the same response to selection [15]. This should be examined in a real population under selection. In contrast to PBLUP, in genomic BLUP (GBLUP), given the high LD between markers and QTL, even distant generations are expected to contribute to prediction accuracy [16]. Lourenco et al. [17] evaluated the benefit of past generations on the accuracy of GEBV using single-step GBLUP, where the genomic relationship matrix was blended with the pedigree-based relationship matrix. Using one set of individuals for validation, they found a small effect of pedigree depth on the accuracy of GEBV [17].
The objective of our study was to examine the effect of including successive generations in the training dataset on accuracy of genomic prediction across different validation sets and to assess the optimal number of training generations for routinely recorded traits. Using data from an elite line of layer chickens, genomic predictions were obtained by using the BayesB genomic prediction method [5] and PBLUP, and the resulting predictions were compared.
Phenotypes and genotypes
Data included phenotypic records for 17,793 birds from an experimental brown-egg laying population, representing 11 generations that hatched between 2002 and 2011. Among those, 5108 birds (including all parents used for breeding) from the most recent nine generations (from G3 to G11) were genotyped with a custom 40 K SNP panel (Illumina, San Diego, CA). Only genotyped females (~2260) with their own phenotypic records were used in the prediction analyses. A total of 23,098 segregating SNPs across 28 chromosomes remained after removing SNPs with a call rate lower than 0.95 (1121 SNPs), a minor allele frequency lower than 0.025 (10,770 SNPs), or a parent-offspring Mendelian inconsistency rate higher than 0.05 (1467 SNPs). The following 16 traits were analyzed: early and late albumen height (eAH, lAH, mm), shell color of the first three eggs (eC3, index units), weight of the first three eggs (eE3, g), early and late egg color (eCO, lCO, index units), early and late average egg weight (eEW, lEW, g), early and late egg production rate (ePD, lPD), early and late shell puncture score (ePS, lPS, g/s), early and late yolk weight (eYW, lYW, g), body weight (lBW, kg) and age at sexual maturity (eSM, d). Measurements of early and late traits were taken at 26–28 and 42–46 weeks, respectively, except for eC3 and eE3, which were measured when hens reached sexual maturity. In total, there were 136,243 and 45,242 phenotypic records for early and late traits, respectively. The pedigree-based (narrow-sense) heritability \(h^{2}\) for each trait was estimated by using a single-trait animal model fitted in ASREML [18] for all phenotyped animals. In this selection program, genomic information was used from 2009 (G7, generation 7), after many generations of conventional multiple-trait selection based on an index of EBV [19]. Three hundred and sixty females and 120 males (out of ~2000 birds) were selected per generation during conventional selection, whereas when genomic selection started, 50 animals of each sex (out of ~600 birds) were selected from G7 to G11. A basic description of the collected phenotypic records is given in Table 1.
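A minimal sketch of the SNP quality-control step described above, using the stated thresholds; the per-SNP summary table and its column names are hypothetical, and the actual genotype-processing pipeline used by the authors is not specified here.

```python
import pandas as pd

# Hypothetical per-SNP summary table; column names are illustrative only.
snps = pd.DataFrame({
    "call_rate": [0.99, 0.90, 0.97],
    "maf": [0.30, 0.40, 0.01],
    "mendel_error_rate": [0.001, 0.002, 0.0],
})

keep = (
    (snps["call_rate"] >= 0.95)
    & (snps["maf"] >= 0.025)
    & (snps["mendel_error_rate"] <= 0.05)
)
filtered = snps[keep]   # SNPs retained for the prediction analyses
```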
Table 1 Summary statistics of the phenotypes available for 16 traitsa in each generation (G)
The following two single-trait models were used to predict EBV or GEBV:
PBLUP: a single-trait animal model using pedigree relationships and all available phenotype records was fitted using ASREML3.0 [18]. The model equation was:
$$ {\mathbf{y}} = {\mathbf{X}}{\varvec{\upbeta}} + {\mathbf{Za}} + {\mathbf{e}}, $$
where y is the vector of phenotypes for each trait in the training set, \( {\varvec{\upbeta}} \) represents the vector of fixed class effects (hatch within generation), \( {\mathbf{a}}\varvec{ } \) is the vector of animal breeding values with \( Var\left( {\mathbf{a}} \right) = {\mathbf{A}}\sigma_{a}^{2} , \) where \( {\mathbf{A}} \) is the pedigree relationship matrix and \( \sigma_{a}^{2} \) is the additive genetic variance estimated using ASREML, \( {\mathbf{X}} \) and \( {\mathbf{Z}} \) are design matrices, and \( {\mathbf{e}} \) is the vector of residual effects with \( Var\left( {\mathbf{e}} \right) = {\mathbf{I}}\sigma_{e}^{2} \), where \( \sigma_{e}^{2} \) is the residual variance estimated using ASREML. In the pedigree-based analyses, the relationship matrix was calculated from either the full pedigree including all animals from 11 generations, or from truncated pedigrees that only included ancestors that were born within two generations prior to the training set. By solving the following mixed model equation [11], the EBV of individuals in the validation population, whose phenotypes were masked, were obtained:
$$ \left[ {\begin{array}{*{20}c} {{\mathbf{X}}^{{\prime }} {\mathbf{X}}} &\quad {{\mathbf{X}}^{{\prime }} {\mathbf{Z}}} \\ {{\mathbf{Z}}^{{\prime }} {\mathbf{X}}} &\quad {{\mathbf{Z}}^{{\prime }} {\mathbf{Z}} + {\mathbf{A}}^{ - 1} \lambda } \\ \end{array} } \right]\left[ {\begin{array}{*{20}c} {{\hat{\varvec{\upbeta }}}} \\ {{\hat{\mathbf{a}}}} \\ \end{array} } \right] = \left[ {\begin{array}{*{20}c} {{\mathbf{X}}^{{\prime }} {\mathbf{y}}} \\ {{\mathbf{Z}}^{{\prime }} {\mathbf{y}}} \\ \end{array} } \right], $$
where \( \lambda = \sigma_{e}^{2} /\sigma_{a}^{2} \), \( {\hat{\varvec{\upbeta }}} \) is the vector of estimates of fixed class effects, and \( {\hat{\mathbf{a}}} \) is the vector of EBV of animals included in the full or truncated pedigrees.
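For illustration, a dense-matrix toy version of solving these mixed model equations is sketched below; the analyses in the paper were run in ASREML with sparse matrices, so this function, with assumed inputs X, Z, A, y and the variance ratio \(\lambda\), is only meant to show the structure of the system.

```python
import numpy as np

def solve_pblup(X, Z, A, y, lam):
    """Solve Henderson's mixed model equations (dense toy version).

    X, Z : design matrices for fixed and animal effects
    A    : pedigree relationship matrix
    y    : phenotype vector
    lam  : variance ratio sigma2_e / sigma2_a
    Returns estimates of fixed effects and EBV.
    """
    Ainv = np.linalg.inv(A)
    lhs = np.block([
        [X.T @ X, X.T @ Z],
        [Z.T @ X, Z.T @ Z + lam * Ainv],
    ])
    rhs = np.concatenate([X.T @ y, Z.T @ y])
    sol = np.linalg.solve(lhs, rhs)
    n_fixed = X.shape[1]
    return sol[:n_fixed], sol[n_fixed:]
```

In this sketch the variance ratio would be taken from the ASREML estimates of the additive genetic and residual variances, as described above.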
Genomic prediction model BayesB [5, 20] was applied using only records on genotyped individuals that had their own phenotypic records (i.e. only females) and was performed using the GenSel4.4 software [20, 21]. Method BayesB assumes that a fraction \( \pi \) of SNPs have zero effects and 1-\( \pi \) SNP effects have a univariate-t distribution with a mean of 0, \( v_{a} \) degrees of freedom, and a scale parameter \( S_{a}^{2} \). This prior assumption of SNP effects is equivalent to assuming that each SNP effect has a univariate normal distribution with a mean of 0 and a SNP-specific variance [22]. Each SNP-specific variance has a scaled inverse Chi square prior distribution with \( v_{j} \) = 4.2 degrees of freedom and a scale parameter \( S_{j}^{2} \) derived from \( \frac{{\tilde{\sigma }_{j}^{2} \left( {v_{j} - 2} \right)}}{{v_{j} }} \), where \( \tilde{\sigma }_{j}^{2} \) is the variance of the additive effect for a randomly sampled SNP calculated as \( \frac{{\tilde{\sigma }_{s}^{2} }}{{\left( {1 - \pi } \right)\mathop \sum \nolimits_{j = 1}^{k} 2p_{j} \left( {1 - p_{j} } \right)}} \), where \( \tilde{\sigma }_{s}^{2} \) is the additive genetic variance explained by SNPs, and \( p_{j} \) is the allele frequency of SNP \( j \) [22]. The priors for the genetic and residual variances for each trait were obtained from the single-trait pedigree-based ASREML analyses. Markov chain Monte Carlo (MCMC) sampling with 55,000 iterations, of which the first 5000 were discarded as burn-in, was used to estimate the posterior means of SNP effects. The convergence of MCMC samples for genetic variance, residual variance, and marker heritability were assessed by using the Heidelberger and Welch test [23] in R/coda package [24]. The model equation used for BayesB is:
$$ y_{im} = \beta_{m} + \mathop \sum \limits_{j = 1}^{k} z_{ij} u_{j} + e_{i} , $$
where \( y_{im} \) is the phenotype for genotyped individual i in the training set in hatch within generation class m, \( \beta_{m} \) is the effect of hatch within generation class m, k is the number of SNPs, \( z_{ij} \) is the allele at SNP j in genotyped individual i coded 0, 1 and 2, \( u_{j} \) is the random effect of SNP j distributed as \( u_{j} \;\sim\;N\left( {0,\sigma_{j}^{2} } \right) \) with probability \( 1 - \pi \), and 0 otherwise, where \( \sigma_{j}^{2} \) is the variance of the additive effect for SNP j, and \( e_{i} \) is the residual effect distributed as \( e_{i} \;\sim\;N\left( {0,\sigma_{e}^{2} } \right) \), where \( \sigma_{e}^{2} \) is the residual variance. The assumed value of \( \pi \) was 0.95. The GEBV of individual i (\( GEBV_{i} \)) in the validation population was derived as:
$$ GEBV_{i} = \mathop \sum \limits_{j = 1}^{k} z_{ij} \hat{u}_{j} , $$
where \( z_{ij} \) is the allele at SNP j of the genotyped individual i, and \( \hat{u}_{j} \) is the posterior mean of the substitution effect of SNP j estimated over 50,000 post burn-in samples.
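Once posterior mean substitution effects have been estimated (in the paper, with GenSel), the GEBV of a validation animal is simply the product of its genotype vector with those effects; a small sketch with illustrative argument names follows.

```python
import numpy as np

def gebv(genotypes, snp_effects):
    """GEBV_i = sum_j z_ij * u_hat_j for each validation animal.

    genotypes   : (n_animals, n_snps) matrix of 0/1/2 allele counts
    snp_effects : posterior mean substitution effects (length n_snps)
    """
    return np.asarray(genotypes) @ np.asarray(snp_effects)
```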
The effect of using different training generations, including animals with phenotypes and genotypes (~300 per generation), was assessed for generations G5–G11. The training sets consisted of animals from successive ancestral generations immediately prior to the validation generation. Additional file 1: Table S1 uses an example to illustrate the assignment of validation and training sets. Different validation sets (from G5 to G11) with different numbers of training generations were assessed. If only G11 was used for validation, spurious environmental effects, such as heat stress in a particular year, would be confounded with the distance between the training and validation generations, which could bias results. Thus, different validation generations were used to avoid this confounding. The maximum numbers of training generations for pedigree-based and marker-based analyses were 10 and 8, respectively. The numbers of phenotypic records within each generation are in Table 1. Additional file 1: Table S2 gives the average number of available genotyped individuals with early and late traits for each generation. Predictive performance of each model was evaluated by prediction accuracy, which was determined in the validation generation based on the correlation between EBV and phenotypes adjusted for fixed effects, standardized by dividing by the square root of trait heritability [25, 26].
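The validation measure described above amounts to the following calculation; this is a sketch in which adjusted_phenotype denotes phenotypes pre-corrected for the hatch-within-generation effects.

```python
import numpy as np

def prediction_accuracy(ebv, adjusted_phenotype, h2):
    """Correlation between (G)EBV and phenotypes adjusted for fixed effects,
    divided by the square root of the trait heritability."""
    r = np.corrcoef(ebv, adjusted_phenotype)[0, 1]
    return r / np.sqrt(h2)
```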
In order to separate the impact of size of the training data set and number of training generations on prediction accuracy of GEBV, additional training scenarios were considered for one of the analyzed traits as an example (eEW) using the BayesB model (Table 2). In that analysis, G10 was used as the validation set and different numbers of genotyped animals (125 or 250) were randomly sampled from one to six training generations (G4–G9). The training scenarios differed in total number of animals and number of generations that contributed to the training set. Some scenarios had the same size of training set but differed in the number of generations that contributed to the training set. For example, scenarios 1 and 5 had 250 genotyped animals in the training set, but in scenario 1, all these 250 animals were from G9, whereas in scenario 5, 125 animals were from G8 and the remaining 125 animals were from G9. Each scenario was repeated five times in order to avoid sample bias.
Table 2 Mean accuracy (±SD) of genomic predictions over 5 replicates obtained with different training setsa for eEWb
Optimal number of training generations
The optimal number of training generations to maximize prediction accuracy was derived for each trait and method as the maximum from a second-order polynomial regression fitted to all the prediction accuracies that were obtained for that method for that trait, using the following model:
$$ y_{ik} = a_{i} k^{2} + b_{i} k + c_{i} + e_{ik} , $$
where \( y_{ik} \) is the prediction accuracy of GEBV obtained using BayesB for trait \( i \) with \( k \) ancestral generations included in the training set, \( a_{i} \) and \( b_{i} \) are regression coefficients, \( c_{i} \) is the intercept, and \( e_{ik} \) is the residual. Significance of regression coefficients was tested for each trait. For all traits, except eCO, eC3, eSM, lAH, lYW, and lPS, the second-order polynomial regression coefficients were significantly (p < 0.01) different from zero. The optimal number of training generations was then derived as min \( \left( { - \frac{{\hat{b}_{i} }}{{2\hat{a}_{i} }},8} \right) \) because the dataset included at most eight generations.
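A sketch of this curve-fitting step is given below; the accuracy values are made up purely to illustrate the mechanics, and the cap of eight generations follows the text.

```python
import numpy as np

def optimal_generations(k, acc, k_max=8):
    """Fit acc = a*k^2 + b*k + c and return the k maximising the fitted curve,
    capped at k_max training generations."""
    a, b, c = np.polyfit(k, acc, deg=2)
    if a >= 0:                         # no interior maximum; fall back to the best observed k
        return k[int(np.argmax(acc))]
    return min(-b / (2 * a), k_max)

# Illustrative use with made-up accuracies for 1 to 8 training generations
k = np.arange(1, 9)
acc = np.array([0.40, 0.48, 0.52, 0.55, 0.56, 0.56, 0.55, 0.54])
print(optimal_generations(k, acc))
```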
Marker-based heritability
Marker-based heritability (\( h_{q}^{2} \)) was defined as the genetic variance explained by the markers divided by the total phenotypic variance. Genomic prediction method BayesC with π = 0 implemented in the GenSel4.4 software [20, 21] was used to estimate \( h_{q}^{2} \), which assumes that all the SNPs have non-zero effects, and each SNP effect is drawn from a normal distribution with a common variance. This BayesC0 model is equivalent to GBLUP [27], except that genetic and residual variances are treated as unknown with given priors, instead of being fixed in GBLUP. The priors for the genetic and residual variance components were obtained from the single-trait pedigree-based ASREML analysis for each trait. MCMC sampling with 55,000 iterations (discarding the first 5000 as burn-in) was used to make inference on \( h_{q}^{2} \).
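Given post-burn-in MCMC samples of the marker-based genetic and residual variances, the posterior mean of \(h_{q}^{2}\) could be summarised as below, assuming the phenotypic variance is the sum of these two components (a sketch with illustrative argument names).

```python
import numpy as np

def marker_heritability(genetic_var_samples, residual_var_samples):
    """Posterior mean of h2_q = sigma2_g / (sigma2_g + sigma2_e),
    averaged over post-burn-in MCMC draws."""
    g = np.asarray(genetic_var_samples, dtype=float)
    e = np.asarray(residual_var_samples, dtype=float)
    return float(np.mean(g / (g + e)))
```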
Prediction accuracy in progeny
Differences between prediction methods
Figure 1 shows boxplots of the prediction accuracies of PBLUP and BayesB for different training generations. Prediction accuracies of PBLUP quickly reached a plateau as the number of training generations increased. The slight fluctuations in prediction accuracies of PBLUP might be due to genetic drift. Prediction accuracies of PBLUP using a truncated pedigree (PBLUP_T, including animals in the training and validation sets, and their relatives traced two generations back) were very similar to those using the full pedigree (PBLUP_F, including all animals from 11 generations) across validation generations. These results indicate that using a truncated or full pedigree to construct the pedigree-based relationship matrix has no significant effect on the accuracy of PBLUP in terms of ranking the current cohort of candidates in this population, which was under selection. Mehrabani-Yeganeh et al. [15] reported that using only the last two generations compared to the full pedigree resulted in the same response to selection in a simulated closed nucleus broiler line. Lourenco et al. [17] also found that depth of pedigree had a very small impact on the accuracy of PBLUP evaluations in US dairy cattle and pig data. They observed the same result for accuracies of GEBV using single-step GBLUP.
Prediction accuracies of EBV over different numbers of training generations across all traits and all validation sets using genomic prediction (BayesB) or pedigree-based BLUP with a truncated (PBLUP_T), or full pedigree (PBLUP_F). The full pedigree included all animals from 11 generations; the truncated pedigree included training and validation animals and their relatives traced two generations back. The bar within each box represents the median of prediction accuracies
In our data, the advantage of genomic evaluations using BayesB over the pedigree-based EBV was obvious (Fig. 1) and can be explained by the fact that genomic prediction uses LD between markers and QTL, as well as pedigree relationships [16]. Prediction accuracies obtained from PBLUP reached a plateau much more quickly as the number of training generations increased than those obtained from BayesB, because pedigree-based relationships decay faster than genomic relationships [4, 16].
In this study, MCMC samples from BayesB for genetic variance, residual variance, and marker-based heritability had converged based on Heidelberger and Welch diagnostics. A fixed π (0.95) was used in the BayesB analyses for all traits. Although using π estimated with the Bayes Cπ method [22] may result in different prediction for some analyses, using a fixed π in the BayesB analysis is not expected to affect the comparison of results within a trait. The BayesB method used in this study uses only animals with known phenotypes and genotypes. In contrast, single-step GBLUP uses pedigree relationships to include phenotypes of non-genotyped individuals.
Differences between traits and training generations
In general, for the first few training generations, prediction accuracies of PBLUP and BayesB increased and then plateaued or dropped slightly when adding more distant ancestral generations (Fig. 2). The impact of adding ancestor generations in the training set on prediction accuracy of GEBV differed between traits. These differences might be caused by differences in heritabilities, genetic architecture, and the number of available genotypes or phenotypes. For some traits (e.g. eAH), prediction accuracy continued to increase as the number of training generations increased, while for other traits accuracies decreased slightly as the number of distant generations in training set increased (e.g. eEW).
Prediction accuracies of EBV across different validation sets using pedigree BLUP with ancestors traced back two generations (PBLUP_T) and genomic prediction over different numbers of training generations for each trait. The bar within each box represents the median of prediction accuracies
In this population, data from distant generations (more than four training generations back) contributed little to prediction accuracy of PBLUP. For most traits, distant ancestral generations continued to contribute to the accuracy of genomic prediction but their contributions were smaller than those of generations that were close to the validation generation. For the same population, Wolc et al. [28] reported that decreasing the genomic relationships between pairs of individuals when the pedigree relationship was less than 0.45, effectively reduced the impact of distant relatives, and increased prediction accuracy for egg production in laying hens when using GBLUP.
To avoid confounding between environmental effects (e.g. heat stress) that can cause animals to re-rank and that might be specific to a particular generation, different validation sets were used in this study. We observed fluctuations in prediction accuracies over training generations, which could be due to variation in environmental effects, distinct population structures, different genomic relationships between training and validation sets, genetic drift, or interactions between genotype and environment. For example, in Additional file 2: Figure S1, the prediction accuracies of eEW ranged from 0.39 to 0.69 for different combinations of training and validation sets that were all characterized by having four generations included in the training set.
In this study, the size of the validation set, number of generations, and density of the SNP panel were limited by available data. Further analyses are needed to validate the effect on genomic prediction accuracy of adding distant ancestral generations in the training set. A larger population could allow the impact of these factors to be characterized and to better identify the contribution of each ancestral generation.
Size and composition of training set
Table 2 presents the prediction accuracies for eEW for eight scenarios that differed in the total number of training animals and the number of generations that contributed to the training set. As expected, for the same number of training generations, prediction accuracies increased with the size of the training set [6, 7]. For example, when the number of training animals from the same generation increased from 125 (scenario 4) to 250 (scenario 1), prediction accuracy of GEBV for the validation animals (G10) increased from 0.23 to 0.46.
Although the numbers of animals in the training set were the same between scenarios 2 and 7, prediction accuracy was greater in scenario 2 than in scenario 7 (Table 2). This difference was more obvious when the size of the training set became larger (comparing scenarios 3 and 9). In scenario 3, all 750 training animals were from the three preceding generations, whereas in scenario 9, 50 % of the animals were from more distant generations. Individuals from closely-related generations can better predict GEBV of validation animals compared to animals from more distant generations [16, 28]. Similar phenomena were observed for the 15 other traits (See Additional file 1: Table S3), except for ePD and lYW, for which prediction accuracy actually decreased as more animals from ancestral generations were added in the training set.
The number of genotyped animals per generation is limited in livestock species. Although increasing the number of training generations is not equivalent to increasing the size of the training set, including data from successive ancestral generations is an alternative approach to enlarge the size of the training population. However, the impact of including such ancestral generations in the training set on genomic prediction accuracies can differ between traits.
Relationship between optimal number of training generations and heritability
Table 3 presents estimates of pedigree-based heritability and marker-based heritability for each trait. Marker-based heritabilities were smaller than pedigree-based heritabilities because markers did not capture all genetic variation.
Table 3 Estimates of pedigree-based and marker-based heritabilities (±SE) for the 16 traitsa from univariate animal models
Figure 3 shows the number of training generations that generated the highest accuracy of GEBV for each trait using BayesB. Traits were sorted by pedigree-based heritability estimates, from low (lPS) to high (eCO). The blue line in Fig. 3 shows the linear relationship between optimal training generation and pedigree-based heritability. The correlation between optimal number of training generations and pedigree-based heritability was equal to 0.65, whereas the correlation between optimal number of training generations and marker-based heritability was equal to 0.55. Additional file 2: Figure S1 shows in detail the regression of prediction accuracy on the number of training generations for each trait. In general, and somewhat surprisingly, the highly heritable traits had a larger optimal number of training generations than the lowly heritable traits.
Optimal number of training generations for genomic prediction for each trait. Traits were sorted by pedigree-based heritability estimates. The blue line is the regression of the optimal number of training generations on heritability
Estimates of optimal number of training generations may vary according to assumptions of the statistical model and/or the density and location of SNPs. For some traits, if assumptions of the statistical model are not valid, the model may not capture the effects of QTL, even if the size of the training population increases. In a simulation study, Sun [29] showed that modeling co-segregation can improve prediction accuracy when the LD between SNPs and QTL is low in a training population that consisted of multiple families and generations. In the case where a causal variant or QTL is included in the SNP panel, adding data from more distant generations in the training set is expected to increase the accuracy of genomic prediction until the prediction accuracy reaches a plateau. When QTL mutations are not on the SNP panel, a high-density panel is likely to achieve higher LD since some SNPs will be closer to the QTL than would be the case with a low-density panel. Thus, when the dataset is sufficiently large and genotyped with high-density panels, the accuracy of genomic prediction is not expected to decrease when distant generations are used for the training set.
Based on this study, for highly heritable traits, prediction accuracy of GEBV was highest when the number of generations in the training set was larger than 4. In contrast, for lowly heritable traits, it was better to include in the training dataset only the individuals that were the most closely-related to the validation individuals. We suggest two strategies that may be useful for populations with multi-trait selection programs: (1) changing the number of training generations for each trait analyzed; or (2) obtaining a weighted optimal number of training generations based on results for all traits in the breeding objective. The weight for each trait could be determined by its relative economic importance in the breeding program.
The effect of increasing the number of training generations on accuracy of genomic prediction differs between traits. The optimal number of training generations in genomic prediction is influenced by the heritability of a trait. For the data used in this study, traits with a lower heritability had a smaller optimal number of training generations than traits with a higher heritability. In practice, the optimal number of training generations to be used in a multi-trait selection population could be based on the importance of the traits in the breeding program.
Sonesson AK, Meuwissen THE. Testing strategies for genomic selection in aquaculture breeding programs. Genet Sel Evol. 2009;41:37.
Daetwyler HD, Villanueva B, Bijma P, Woolliams JA. Inbreeding in genome-wide selection. J Anim Breed Genet. 2007;124:369–76.
Henderson CR. Application of linear models in animal breeding. 3rd ed. Guelph: CGIL Publications; 1984.
Wolc A, Arango J, Settar P, Fulton JE, O'Sullivan NP, Preisinger R, et al. Persistence of accuracy of genomic estimated breeding values over generations in layer chickens. Genet Sel Evol. 2011;43:23.
Meuwissen TH, Hayes BJ, Goddard ME. Prediction of total genetic value using genome-wide dense marker maps. Genetics. 2001;157:1819–29.
Hayes BJ, Bowman PJ, Chamberlain AJ, Goddard ME. Invited review: genomic selection in dairy cattle: progress and challenges. J Dairy Sci. 2009;92:433–43.
Daetwyler HD, Villanueva B, Woolliams JA. Accuracy of predicting the genetic risk of disease using a genome-wide approach. PLoS One. 2008;3:e3395.
Daetwyler HD, Pong-Wong R, Villanueva B, Woolliams JA. The impact of genetic architecture on genome-wide evaluation methods. Genetics. 2010;185:1021–31.
Kizilkaya K, Fernando RL, Garrick DJ. Genomic prediction of simulated multibreed and purebred performance using observed fifty thousand single nucleotide polymorphism genotypes. J Anim Sci. 2010;88:544–51.
Habier D, Tetens J, Seefried FR, Lichtner P, Thaller G. The impact of genetic relationship information on genomic breeding values in German Holstein cattle. Genet Sel Evol. 2010;42:5.
Henderson CR. Best linear unbiased estimation and prediction under a selection model. Biometrics. 1975;31:423–47.
Im S, Fernando R, Gianola D. Likelihood inferences in animal breeding under selection: a missing-data theory view point. Genet Sel Evol. 1989;21:399–414.
Fernando RL, Gianola D. Statistical inferences in populations undergoing selection or non-random mating. In: Gianola D, Hammond K, editors. Advances in statistical methods for genetic improvement of livestock. Berlin: Springer; 1990. p. 437–53.
Sorensen D, Fernando R, Gianola D. Inferring the trajectory of genetic variance in the course of artificial selection. Genet Res. 2001;77:83–94.
Mehrabani-Yeganeh H, Gibson JP, Schaeffer LR. Using recent versus complete pedigree data in genetic evaluation of a closed nucleus broiler line. Poult Sci. 1999;78:937–41.
Habier D, Fernando RL, Dekkers JCM. The impact of genetic relationship information on genome-assisted breeding values. Genetics. 2007;177:2389–97.
Lourenco DAL, Misztal I, Tsuruta S, Aguilar I, Lawlor TJ, Forni S, et al. Are evaluations on young genotyped animals benefiting from the past generations? J Dairy Sci. 2014;97:3930–42.
Gilmour AR, Gogel BJ, Cullis BR, Thompson R. ASReml user guide. Hemel Hempstead: VSN Int Ltd.; 2009.
Wolc A, Zhao H, Arango J, Settar P, Fulton JE, O'Sullivan NP, et al. Response and inbreeding from a genomic selection experiment in layer chickens. Genet Sel Evol. 2015;47:59.
Fernando RL, Garrick D. Bayesian methods applied to GWAS. Methods Mol Biol. 2013;1019:237–74.
Garrick DJ, Fernando RL. Implementing a QTL detection study (GWAS) using genomic prediction methodology. Methods Mol Biol. 2013;1019:275–98.
Habier D, Fernando RL, Kizilkaya K, Garrick DJ. Extension of the bayesian alphabet for genomic selection. BMC Bioinform. 2011;12:186.
Heidelberger P, Welch PD. Simulation run length control in the presence of an initial transient. Oper Res. 1983;31:1109–44.
Plummer M, Best N, Cowles K, Vines K. CODA. Convergence diagnosis and output analysis for MCMC. R News. 2006;6:7–11.
Wolc A, Stricker C, Arango J, Settar P, Fulton JE, O'Sullivan NP, et al. Breeding value prediction for production traits in layer chickens using pedigree or genomic relationships in a reduced animal model. Genet Sel Evol. 2011;43:5.
Legarra A, Robert-Granié C, Manfredi E, Elsen JM. Performance of genomic selection in mice. Genetics. 2008;180:611–8.
VanRaden PM. Efficient methods to compute genomic predictions. J Dairy Sci. 2008;91:4414–23.
Wolc A, Arango J, Settar P, Fulton JE, O'Sullivan NP, Preisinger R, et al. Application of a weighted genomic relationship matrix to breeding value prediction for egg production in laying hens. In: Proceedings of the international plant and animal genome XXI, 11–16 January 2013. San Diego; 2013.
Sun X. Genomic prediction using linkage disequilibrium and co-segregation. ProQuest Diss Publ 2014:3684339, Iowa State University; 2014.
ZW undertook the analysis and wrote the draft. JA, PS, JF and NPO conducted the experiment and collected the data. ZW, AW, XS, DJG, RLF, and JCMD conceived the study and contributed to the methods. All authors read and approved the final manuscript.
This study was supported by Hy-Line Int., the EW group, and Agriculture and Food Research Initiative competitive Grants 2009-35205-05100 and 2010-65205-20341 from the USDA National Institute of Food and Agriculture Animal Genome Program. XS was funded by a grant from the Swedish Research Council (2014-371).
Ziqing Weng, Anna Wolc, Rohan L. Fernando, Jack C. M. Dekkers & Dorian J. Garrick
Hy-Line International, Dallas Center, IA, 50063, USA: Anna Wolc, Jesus Arango, Petek Settar, Janet E. Fulton & Neil P. O'Sullivan
Department of Medical Epidemiology and Biostatistics, Karolinska Institute, Stockholm, Sweden, and MRC Human Genetics Unit, MRC Institute of Genetics and Molecular Medicine, University of Edinburgh, Edinburgh, UK: Xia Shen
Institute of Veterinary, Animal and Biomedical Sciences, Massey University, Palmerston North, New Zealand: Dorian J. Garrick
Correspondence to Dorian J. Garrick.
Examples of experimental design for training (T) and validation (V) sets. Table S2. Description of the average number of individuals with own phenotypes and genotypes in each generation for early and late traits. Table S3. Mean accuracies (± SD) of genomic predictions over 5 replicates for different training sets and for the studied traits. In this analysis, G10 was used as the validation generation and training individuals were randomly sampled from G4 to G9. (1) eCO (early egg color); (2) eC3 (early color of first 3 eggs); (3) eE3 (early weight of first 3 eggs); (4) eSM (early age at sexual maturity); (5) eAH (early albumen height); (6) eYW (early yolk weight); (7) ePD (early egg production rate); (8) ePS (early egg puncture score); (9) lCO (late egg color); (10) lEW (late average weight of 3-5 eggs); (11) lBW (late body weight); (12) lAH (late albumen height); (13) lYW (late yolk weight); (14) lPD (late egg production rate); (15) lPS (late egg puncture score).
Scatter plot of accuracies of genomic predictions across different validation sets over training generations for each trait. The blue line is the regression of the accuracy on the number of training generations. The red line indicates the optimal number of training generations. The R-squared of regression line is presented as r2.
Weng, Z., Wolc, A., Shen, X. et al. Effects of number of training generations on genomic prediction for various traits in a layer chicken population. Genet Sel Evol 48, 22 (2016). https://doi.org/10.1186/s12711-016-0198-9
Keywords: Quantitative trait locus, Estimated breeding value, Training generation
Computer simulations of the signalling network in FLT3 +-acute myeloid leukaemia – indications for an optimal dosage of inhibitors against FLT3 and CDK6
Antoine Buetti-Dinh (ORCID: orcid.org/0000-0002-6469-0296) &
Ran Friedman
Mutations in the FMS-like tyrosine kinase 3 (FLT3) are associated with uncontrolled cellular functions that contribute to the development of acute myeloid leukaemia (AML). We performed computer simulations of the FLT3-dependent signalling network in order to study the pathways that are involved in AML development and resistance to targeted therapies.
Analysis of the simulations revealed the presence of alternative pathways through phosphoinositide 3 kinase (PI3K) and SH2-containing sequence proteins (SHC), that could overcome inhibition of FLT3. Inhibition of cyclin dependent kinase 6 (CDK6), a related molecular target, was also tested in the simulation but was not found to yield sufficient benefits alone.
The PI3K pathway provided a basis for resistance to treatments. Alternative signalling pathways could not, however, restore cancer growth signals (proliferation and loss of apoptosis) to the same levels as prior to treatment, which may explain why FLT3 resistance mutations are the most common resistance mechanism. Finally, sensitivity analysis suggested the existence of optimal doses of FLT3 and CDK6 inhibitors in terms of efficacy and toxicity.
Predictive modelling approaches are used frequently during modern drug development. These include molecular modelling and screening [1], QSAR [2, 3], chemoinformatics-based ligand identification [4, 5], prediction of ADMET [6] and other aspects such as crystal structures of drugs [7]. Another important aspect is that of drug resistance, which is common in infectious diseases [8, 9] and cancer [10]. Unfortunately, our understanding of drug resistance and the causes for it is limited, and predictive approaches are hard to come by.
Many membrane-bound receptor tyrosine kinases (RTKs) are important for regulation of cellular growth [11, 12]. Mutations that alter their activity thus lead to abnormal proliferation that is associated with the development of cancers [13]. FLT3 is an RTK whose physiological role is to regulate haematopoiesis. Mutations in FLT3 are involved in AML (FLT3+-AML) and, to a minor extent, in acute lymphoblastic leukaemia (ALL) as well [11]. This makes FLT3 a potential drug target. Internal tandem duplications (ITD) in the juxtamembrane domain of FLT3 are common in FLT3-derived AML patients [14]. In addition, several mutations in the kinase activation domain cause sustained FLT3 activity that leads to uncontrolled proliferation and abates apoptosis. These include mutations in residues R834 [15], D835 [16], I836 [17], N841 [18] and Y842 [19] of the activation loop and rare mutations in the extracellular juxtamembrane domain [15]. Small molecules such as lestaurtinib, midostaurin, ponatinib, quizartinib, sorafenib, sunitinib and tandutinib can inhibit FLT3. Midostaurin has recently been approved by the US Food and Drug Administration (FDA) for the treatment of adult patients with newly diagnosed AML who are FLT3 mutation-positive. Ponatinib, sorafenib and sunitinib are approved for clinical use (for other conditions). Unfortunately, RTK inhibitors are often subject to drug resistance [20]. Known resistance mechanisms against midostaurin include FLT3-ITD overexpression, genetic (13q) alterations, upregulation of antiapoptotic genes, downregulation of proapoptotic genes, and FLT3 resistance mutations [21] including F621L, A627P, N676K, F691L, and Y842C [22, 23]. Alternative signalling can also provide cancers with treatment escape routes that bypass the signalling pathways blocked by therapeutic inhibitors. This resistance mechanism is based on the fact that biological signalling is typically distributed over multiple components. It rarely relies on a single path that connects a receptor to its target, but rather involves multiple, converging, diverging and recursive branches of the signalling network. This provides cancers with a means of boosting alternative signalling in order to compensate for pathways blocked by inhibitors, thereby promoting cancer-driving processes such as cellular proliferation or reduced apoptosis despite therapy [24, 25].
Experimental evidence indicates that FLT3 signalling induces a cascade of events that involves an intricate network of signalling components comprising CDK6, PI3K, STAT (signal transducer and activator of transcription), AKT (protein kinase B), BCL2-BAD (BCL2-family protein – BCL2 antagonist of cell death), RAS, MEK/ERK (mitogen-activated ERK kinase / extracellular signal-regulated kinase) and other cellular components known to play a role in the development of diverse cancers [11, 12, 14, 26–37]. Following how individual components of the signalling networks interact in a cancer cell is a challenge. We have developed a computational framework to study signal transduction networks based on chemical principles [38]. Through interfering with some of the network components, we identified conditions in which interventions to prevent metastasis in a model breast cancer could work (or not) [39], and suggested combination therapy for nucleophosmin anaplastic lymphoma kinase (NPM-ALK) derived anaplastic large cell lymphomas [40]. Other approaches exist to analyse signal transduction networks, with different degrees of detail necessary to set up a model [41, 42], from highly detailed (e.g., based on mass-action kinetics) [43–48] to qualitative Boolean models [49, 50]. In between these two extremes, semi-quantitative models make simplifying assumptions that allow them to provide quantitative insights into the studied system, while requiring fewer experimental details to set them up [38–40, 51, 52]. The epidermal growth factor receptor ErbB signalling network was analysed by integrating high-level details into a mass-action-based modelling framework, and therapeutic antibodies to target the cancer-related ErbB3 RTK were developed [45, 46, 48]. New combination therapies were also suggested by semi-quantitative models of AML signalling [51]. An advantage of semi-quantitative models is that their flexibility allows them to take into account aspects of cellular communication networks that are increasingly recognised to play a role in cancer development and the emergence of resistance to therapies. This makes it possible to perform simulations of cell signalling that include the evolution of cancer cell populations [20, 53–55], cellular heterogeneity [56–60], and the selective pressure in the cancer microenvironment [61].
Since midostaurin has only been approved for clinical use this year and given that FLT3+-AML is a fairly rare cancer, little is known about alternative signalling pathways or the potential for combination therapy. We applied a knowledge-based numerical simulation and sensitivity analysis to different FLT3 network models. Our aim was to assess the effect of single or dual therapeutic inhibition. This allowed us to make predictions on signalling pathways that are liable to confer resistance to therapy aimed at FLT3+-AML. The networks were analysed with respect to apoptosis and cell proliferation, where loss of apoptosis (LOA) and gain of proliferation were viewed as cancer-promoting end-states. Interestingly, it has been suggested before that apoptosis can be important for cancer progression if cell division is slow [62]. In the case of AML, however, this does not appear to be the case, i.e., inhibition of apoptosis promotes survival of the cancer cells [63].
The network of interactions in FLT3 +-AML is presented in Fig. 1. The signals are transmitted between the different components of the network through activation or inhibition, which results in two cancer-promoting end-states: increased cell proliferation and LOA. The simulations were first performed by applying a coarse-grained approach [40] whereby each node assumed one of two possible states ("low activity" or "high activity"), and exhaustive simulations were performed (see the "Methods" section).
The interaction network of FLT3. FLT3 is represented in yellow and through different nodes it transduces the signal to proliferation and apoptosis, the network's end-points that contribute to the development of AML (red nodes). Blue nodes represent potential candidates for combined inhibition therapy. Note that the two end-points yield different consequences: proliferation leads to tumour growth, whereas apoptosis limits the growth (and thus LOA leads to tumour growth)
The approach was applied to four FLT3 network variants: an intact (complete) network ("FLT3, FLT3-ligand, CDK6 and HCK contribute the most to cell proliferation and loss of apoptosis" section), a network with constitutive low activity of FLT3 (simulating FLT3 targeted inhibition, "Inhibition of FLT3 intensifies signal flow through SHC, PI3K, RAS, AKT and PDK1" section), network with constitutive low activity of CDK6 (simulating CDK6 targeted inhibition, "FLT3, SHC and PI3K are important for the control of end-points when CDK6 is inhibited" section), and constitutive low activity of FLT3 and CDK6 (dual inhibition, "Combined inhibition of FLT3 and CDK6 may be overcome through SHC and PI3K signalling" section). Sensitivity profiles of each network (central plots in Additional file 1: Figures S1–S4) were obtained by simulating all combinations of network states where each node's activity could be either high or low. These sensitivity plots represent how sensitive the end-points (proliferation or apoptosis) are to modification of the activity of each of the other nodes, which suggests potential modes of intervention. A subset of network states, corresponding to the upper and lower extremes of sensitivity profiles, represents network components that strongly contribute to change the cancer-promoting end-states (increased cellular proliferation and LOA, represented by the red and blue datapoints in the central plots of Additional file 1: Figures S1–S4, respectively). Bar plots flanking the central sensitivity plot represent the relative percentage of cases where a node was responsible for a high or low sensitivity value among all network states constituting the top/bottom-2%. In addition, the most probable signalling path from the most influential nodes to the end-points was also inferred (signal flow graphs on the top and bottom of Additional file 1: Figures S1–S4). The coarse-grained analysis was later complemented by detailed (fine-grained) simulations where few nodes assumed multiple intermediate activity values while the others assumed low (resting) activities.
FLT3, FLT3-ligand, CDK6 and HCK contribute the most to cell proliferation and loss of apoptosis
A first set of simulations was performed with the intact network in order to identify the components that contribute the most to increased cell proliferation and LOA. This analysis (Additional file 1: Figure S1) revealed that FLT3, FLT3-ligand (FLT3L), HCK (hematopoietic cell kinase) and CDK6 were the nodes that were most commonly associated with both end-points. FLT3L is a hematopoietic growth factor that activates wild-type (wt)-FLT3 [64]. Constitutively active FLT3 (due to driver mutations or ITD) does not depend on FLT3L. This is clearly shown in the simulations when examining signal transduction under the conditions in which FLT3 was the cause of increased cell proliferation or LOA (signal flow graphs on the top- and bottom left-hand sides of Additional file 1: Figure S1). In these graphs, the statistical association of other nodes involved in the end process simultaneously with FLT3 is indicated by the graph's node sizes (the larger the node, the stronger the association). The colour of the nodes indicates their activity contribution (the darker the node, the stronger its ability to deliver a signal downstream). As shown in these graphs, when FLT3 is highly active, HCK, CDK6 and RUNX1 (runt-related transcription factor 1) are also highly active, but FLT3L is not. The nodes that play a major role in developing a proliferative phenotype when FLT3 is turned on include HCK, CDK6, SHC and RUNX1 (top graphs in Additional file 1: Figure S1). A similar situation was observed for the graphs associated with LOA, with the difference that, as indicated by the bottom-2% bar plot, PI3K together with its downstream nodes (AKT, PDK1, RSK (90-kDa ribosomal protein S6 kinase), CREB (cyclic adenosine monophosphate-response element binding protein), and mTOR (mammalian target of rapamycin)) can become an alternative pathway to LOA (bottom right graphs in Additional file 1: Figure S1).
Our simulations agree with experimental data obtained using AML cell lines carrying FLT3-ITD mutations which were subject to small interfering RNA (siRNA) inhibiting FLT3 or HCK. This caused a reduction in proliferation of ∼ 3–10-fold [14]. Similarly, in our coarse-grained simulations, when inhibiting in silico FLT3 we could observe a decrease in frequency of CDK6 and HCK of ∼ 10-fold in the top-2% regions of the proliferation sensitivity profile (Additional file 1: Figures S1–S2).
HCK is a non-RTK which is highly expressed and activated in some leukaemias but whose expression is reduced in others [65]. HCK can be inhibited by small molecules such as RK-20449 [66], which may have beneficial effects against several cancers [66, 67]. CDK6 is a serine/threonine protein kinase that contributes to the entry of the cell to the DNA synthesis phase (G1 →S) of the cell cycle. The CDK6 inhibitors palbociclib and ribociclib are used in the treatment of advanced-stage oestrogen receptor (ER)-positive breast cancer [68] and may be used in other cancers as well (including AML [69]). Resistance mutations to palbociclib have hitherto not been detected, perhaps due to its binding mode [70]. Thus, both CDK6 and HCK may be relevant drug targets in FLT3 +-AML in addition to FLT3. CDK6 inhibitors have the advantage that they are already approved and considered safe to use.
Inhibition of FLT3 intensifies signal flow through SHC, PI3K, RAS, AKT and PDK1
Following the simulation of the intact signalling network, a second set of coarse-grained simulations was performed, this time by inhibiting FLT3. The results of these simulations are presented in Additional file 1: Figure S2. The bar plots in the figure indicate that, upon inhibition of FLT3, the most important signal transduction components become the adapter protein Shc (SHC), the cell surface RTK AXL, and PI3K. AXL was found to be more relevant to proliferation in this case, and PI3K to LOA. Interestingly, inhibition of FLT3 removes the influence of HCK and CDK6 on the end-points. This is likely due to the feedback loop involving CDK6, FLT3 and HCK.
Simulations of the network were also used to follow the signal flow. This analysis revealed that inhibition of FLT3 resulted in an intensification of the flow through SHC, PI3K, RAS, AKT and PDK1. Apparently, PDK1 and AKT could activate an alternative signalling pathway to stimulate proliferation (top 4th and 5th graphs in Additional file 1: Figure S2). This finding from the simulations is supported by qualitative experimental data from the development of the BAG956 inhibitor [71, 72]. However, the influence of these nodes was rather limited, as indicated by the corresponding bar plot (frequency < 10%). This could explain why the most common resistance mechanism to FLT3 inhibitors is resistance mutations. Apparently, alternative networks only partially restore the signal to proliferation and LOA.
FLT3, SHC and PI3K are important for the control of end-points when CDK6 is inhibited
Since CDK6 inhibitors are available, tolerated and hitherto not subject to resistance mutations, inhibition of CDK6 was also simulated as an alternative to inhibition of FLT3 (Additional file 1: Figure S3). Whereas inhibition of FLT3 reduced the significance of CDK6, CDK6 inhibition did not have the same influence on FLT3, which remained a key component of the network in promoting proliferation, together with its ligand, SHC, AXL and PI3K. FLT3 is represented in 22% of the simulations where proliferation was highest, and only in those cases was HCK also important (signal flow graphs, top left). Otherwise, the feedback loop involving CDK6, FLT3 and HCK is inactive and signalling is compensated by the nodes in the lower part of the graphs (FLT3, AXL, SHC, RAS and PI3K). The involvement of these nodes compensates for the inhibition of CDK6 and suggests that proliferation can be stimulated through PI3K, SHC and AXL as an alternative to the intact network signalling. Experimental data support our simulations, except for the role of the SRC kinase (included in the SHC_assembly node of our model), which was shown to also influence CDK6 rather than acting only downstream of it [14]. This is possibly due to the promiscuous nature by which SH domains bind their partners to assemble diverse molecular complexes [73].
With respect to LOA, when CDK6 was inhibited, the role of FLT3 became much less important. Instead, PI3K took over. Taken together, the simulations with inhibited CDK6 indicated that PI3K, SHC and AXL became signalling alternatives for both proliferation and apoptosis. Interestingly, PI3K was suggested to be an escape mechanism for ER positive breast cancer tumours that became resistant to CDK6 inhibitors [14, 74]. This may be a common escape mechanism for CDK4/6 inhibitors.
Combined inhibition of FLT3 and CDK6 may be overcome through SHC and PI3K signalling
The simulations of FLT3 inhibited and CDK6 inhibited networks indicated that FLT3 inhibition had a larger effect than CDK6 inhibition, and that FLT3 was important for proliferation even if CDK6 was inhibited. In a final set of coarse-grained simulations, both FLT3 and CDK6 were inhibited (Additional file 1: Figure S4). By and large, the results of the simulations with dual inhibition of FLT3 and CDK6 resembled the case of FLT3 inhibition, where stimulation of proliferation and apoptosis were dependent almost exclusively on SHC and PI3K signalling. A notable difference, however, was the emergence of MEK/ERK in the proliferation bar plot (albeit at a low influence level (< 5%), see Additional file 1: Figure S4).
Sensitivity analysis suggests that the system can be controlled even if PI3K expression is increased
Fine-grained simulations are computationally demanding, but enable the calculation of the sensitivity of the system with respect to small variations of the variables and identify regions that can be controlled through intervention (here, inhibition of FLT3, CDK6, or both) or where inhibition is not beneficial in terms of achieving the desired results. To this end, following earlier studies [38, 39], simulations of the networks were carried out in which the activities of FLT3 and CDK6 were modified in small steps (see the "Methods" section) subject to three levels of PI3K activity, i.e., normal, low (1/100 of the normal level), and high (100 × normal). The results of this analysis are shown in Fig. 2.
Fine-grained simulations. Steady-state and sensitivity of proliferation and apoptosis to variations of the activities of FLT3 and CDK6. Convex (concave) surfaces represent the sensitivity of the proliferation (apoptosis) end-point with respect to variation in FLT3 (left) or CDK6 (right) activities. The bottom projections in the lower planes (within the gray box) are set to arbitrary z-axis values and represent steady-state activities of the end-points as a function of FLT3 and CDK6 activities at low PI3K activity. These projections correspond to the sensitivity surfaces in the upper part and allow to visualise how the variables FLT3 and CDK6 depend on each other, and their influence on the network end-points (further details of such projections at different levels of PI3K activity are available in Additional file 1: Figure S5–S6). The red star symbols indicate the point where sufficient inhibition of FLT3 (10-fold inhibition from the maximum) and CDK6 (15-fold inhibition from the maximum) drive the system to a controllable region of intermediate steady-state levels of both proliferation and apoptosis. A cyan segment connects this point through the different complementary quantities represented, i.e., the sensitivities at different PI3K activities in the upper surfaces, and the corresponding end-points' steady-state activities in the lower projections. This multidimensional representation allows to appreciate both the steady-state activity of the variables (which would correspond to experimentally measurable quantities such as tumour markers, RNA or proteins), as well as their sensitivity to changes in the other variables' activities. The left and right plots can be compared to top-view heat maps for proliferation (Additional file 1: Figure S5) and apoptosis (Additional file 1: Figure S6) which represent steady-state and sensitivity. PCA of the network variables under different PI3K independent activities is shown as a function of PI3K activities in Additional file 1: Figures S7–S9
Analysis of the fine-grained simulations revealed that under the right conditions, the system could remain under control with respect to apoptosis and proliferation. A controllable region (sensitivity higher or lower than zero) was observed in the low to medium range of FLT3 and CDK6 activities (as shown by the sensitivity surfaces in Fig. 2, and Additional file 1: Figures S5–S6). Beyond that threshold (i.e., where sensitivity is close to zero), the system lost controllability to external stimuli, and a high proliferation regime became dominant (as presented by the upper x-y-plane projections in plots of Fig. 2). Loss of controllability of LOA was observed at the same time, but to a smaller extent (as shown by the lower x-y-plane projections in Fig. 2 and more clearly in Additional file 1: Figure S6). Increasing the activity of PI3K decreased the end-points' sensitivity to changes in the activities of FLT3 and CDK6. This made the system less controllable by external stimuli (Fig. 2). Moreover, once high proliferation and LOA are reached, the simulations predicted that reverting back to a physiological, healthy regime will be difficult if at all possible through inhibition of FLT3 and CDK6.
Unexpected connections between the nodes revealed by principal component analysis
Principal component analysis (PCA) was used to detect co-activity (when it was applied to steady-state values) and co-regulation (when it was applied to sensitivities) patterns between the signalling components in fine-grained simulations under different PI3K activities as above. The results of this analysis are presented in Additional file 1: Figures S7–S9 which correspond to the red, green, and blue curves, respectively, in Fig. 2.
In the steady-state PCA, FLT3 and CDK6 were clustered together because of the external β tuning (see Additional file 1: Table S1). RAF, SHC and HCK were clustered with the proliferation end-point at low and intermediate PI3K activity, while at high PI3K activity proliferation merged with a neighbouring cluster composed of STAT, PI3K and RAS. This suggests that proliferation becomes driven by STAT and RAS upon an increase in PI3K activity. The apoptosis end-point clustered with AXL, FLT3L, 4E-BP1 (eukaryotic initiation factor 4E-binding protein) and BCL2-BAD at low PI3K activity. It merged with other network components into larger clusters as the activity of PI3K was increased. This suggests that control of apoptosis with increased PI3K activity becomes distributed over multiple nodes besides the ones strictly belonging to the apoptosis signalling path (S6K and BCL2-BAD).
The sensitivity PCA indicated that FLT3 clustered with RUNX1 at all levels of PI3K, while CDK6 clustered with AXL at intermediate and high PI3K activity. Together, they were associated with apoptosis among other components (BCL2-BAD, 4E-BP1 and FLT3L at low PI3K). Interestingly, a cluster composed of RAS, SHC and HCK became isolated from the other variables at intermediate and high PI3K activity (increasing hierarchical clustering height) whereas the same components clustered with PDK1, PI3K and AKT at low activity of PI3K. This suggests that with increasing activity of PI3K, co-regulatory patterns become more defined in grouping FLT3 with RUNX1, CDK6 with AXL, PI3K with PDK1, and AKT, RAS with SHC and HCK. In contrast, the end-points proliferation and apoptosis clustered in small groups under low PI3K but merged into larger ones under higher activity levels of PI3K. This suggests that regulation of the end nodes at high activity of PI3K became distributed over a larger number of signalling components, which explains the loss of sensitivity observed in the sensitivity profiles as a function of increasing PI3K (Fig. 2).
Combined inhibition of FLT3 and CDK6 can be beneficial
The feedback between FLT3 and CDK6 (Fig. 1) implies an interdependent regulation that has the effect of restricting the activity of these two components of the network to a similar range (as indicated by the narrow diagonal steady-state activity projections in the x-y-plane of Fig. 2, which widen at high activity levels of FLT3 and CDK6). This pattern suggests that a combined, partial inhibition of FLT3 and CDK6 would be sufficient to restrict the system to a sensitive area of the regulation space represented in Fig. 2. More precisely, a 10-fold inhibition of FLT3 from its maximum activity level, combined with a 15-fold CDK6 inhibition from maximal CDK6 activity, would suffice to drive the system to a sensitive region of intermediate steady-state levels of both proliferation and apoptosis. This point (indicated by a red star symbol in Fig. 2, and Additional file 1: Figures S5–S6) corresponds to the transition zone between the region where sensitivity surfaces are close to zero, and therefore the system is poorly controllable, and the region of higher controllability where sensitivity surfaces have positive or negative values. Stronger inhibition of either or both components is predicted to further decrease the activities of the cancer-driving end-points (in a synergistic way, due to the feedback loop involving FLT3, CDK6 and HCK). Combining FLT3 and CDK6 inhibitors at smaller doses than required for individual therapy may thus be a sufficient or even superior solution in terms of efficacy and minimisation of secondary effects.
Simulations of the network of interactions based on the current knowledge of FLT3 +-AML were carried out in order to identify potential routes of resistance besides FLT3 mutations and examine the potential for combined inhibition of FLT3 and CDK6. Although both FLT3 and CDK6 inhibitors are available, resistance and intolerance limit their benefits. Particularly, CDK6 inhibitors may not be tolerated due to toxicities [75]. FLT3 inhibitors have limited use due to the emergence of mutations that make the drugs less efficient in controlling the activity of FLT3.
The simulations suggested that upon FLT3 inhibition, signal flow through SHC, PI3K, RAS, AKT and PDK1 becomes more intense and may provide alternative paths to maintain sustained cellular proliferation and reduced apoptosis. Inhibition of CDK6 was of little use in itself since FLT3 could still drive cell proliferation. Combined inhibition of FLT3 and CDK6 reduced the severeness of cancer-promoting processes, but could still be bypassed by PI3K-mediated signalling involving the nodes PI3K, SHC and AXL resulting in potential treatment escape routes. The simulations indicated that FLT3, SHC and PI3K are important for the end-points' control when CDK6 is inhibited. The analysis further suggests the existence of an optimal combination of FLT3 and CDK6 inhibitors that would be efficient even if FLT3 is somewhat more active due to resistance mutations and may require lower doses of CDK6 than necessary for inhibition of CDK6 alone.
FLT3 signalling network
A knowledge-based network model of FLT3 and its principal interaction partners was assembled by combining the experimental information summarised in references [11, 12, 14, 26–37]. FLT3 was shown to associate in vitro with the SHC complex (composed of SHC, CBL (a proto-oncogene), SHIP (SH2-domain-containing inositol phosphatase), SHP2 (SH2-domain-containing protein tyrosine phosphatase 2), GAB2 (GRB2-binding protein) and GRB2-SOS (son of sevenless)), which is lumped into a single network node denoted "SHC_assembly" in the network scheme (see Fig. 1), or "SHC" (in Additional file 1: Figures S7–S9) [26–30]. Downstream of the SHC complex, the RAS → RAF → MEK/ERK pathway influences the activity of genes involved in stimulating cellular proliferation and repressing apoptosis. These cancer-driving processes are represented by two separate network end-points, denoted in the network scheme as "proliferation" and "apoptosis". ETS domain-containing protein (ELK), p38 and STAT mediate signalling from RAF/MEK/ERK to the transcription of genes involved in proliferation [11, 31], together with the PI3K → AKT pathway, which also regulates apoptosis through mTOR, S6K and BCL2-BAD [12, 32, 33]. The same pathway also bridges proliferation with apoptosis via PDK1, RSK and CREB [11, 12, 34]. Finally, interactions were included to take into account the regulation between FLT3, HCK and CDK6 [14, 35], as well as the role of RUNX1 and AXL kinases [36, 37].
Network simulation and sensitivity analysis
Signalling in the FLT3 networks (intact network, inhibited FLT3, inhibited CDK6, inhibited FLT3 and CDK6) was simulated with the computational method developed by us previously [38, 39]. Signalling networks were constructed as interaction diagrams composed of nodes and edges. The nodes represented signalling components as a set of ordinary differential equations (ODEs). Edges represented the interaction links between the components (modelled as empirical Hill-type transfer functions). The system is described as a network of interacting components that evolve in time according to the ODEs. Every node in the model is parametrised by the parameters β and δ and every link by α,γ and η (see Table 1), resulting in a set of ODEs for the nodes {X,Y,...}:
$$ \left\{\begin{array}{ll} dX/dt = - \delta_{X}X + (\beta_{X} + \sum_{i} {Act}_{i}) \cdot \Pi_{j} {Inh}_{j} \\ dY/dt = - \delta_{Y}Y + (\beta_{Y} + \sum_{i} {Act}_{i}) \cdot \Pi_{j} {Inh}_{j} \\ \cdots \\ \end{array}\right. $$
Table 1 Model parameters
The parameter β accounts for the basal activity as a zero-order term added to each ODE, and δ for the decay of the biological species as a first-order decay term subtracted from the ODEs. We refer to the activity of a protein in analogy to the activity of a chemical solute, i.e., it corresponds to the effective concentration of a protein in its biologically active conformation. The biological activity cannot be compared directly with experiments and is given in arbitrary units that can be roughly translated to a signalling protein that is abundant in the cell (i.e., in the order of 1 μM) [76]. Values for end-points (proliferation and apoptosis) can only be appreciated by comparison, and we assume that any treatment would aspire to keep proliferation as low as possible and apoptosis as high as in healthy physiological conditions.
The Hill-type regulatory functions used to link the nodes to each other are defined according to Eqs. 2 and 3 for activation and inhibition, respectively. Arrows representing activation (→) and inhibition (\(\dashrightarrow \)) correspond to the network scheme in Fig. 1.
$$\begin{array}{@{}rcl@{}} Act(X \longrightarrow Y;\alpha,\gamma,\eta) = \alpha\frac{X^{\eta}}{X^{\eta}+\gamma^{\eta}} \end{array} $$
$$\begin{array}{@{}rcl@{}} Inh(X \dashrightarrow Y;\alpha,\gamma,\eta) = \alpha\frac{\gamma^{\eta}}{X^{\eta}+\gamma^{\eta}} \end{array} $$
The Hill-exponent η is an empirical parameter widely used to quantify nonlinear signalling interactions (e.g., positive/negative binding cooperativity) [77] and was kept equal to one in the present work. The parameter γ establishes an activation threshold along the abscissa and α is a multiplicative scaling factor; both were also set to one throughout the current work. When multiple links point to a single node, activation functions are added to each other while inhibition functions are multiplied by the current level of activity (see references [78, 79]).
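A minimal sketch of this formalism is shown below, assuming a toy three-node network (a receptor node that activates a proliferation node and inhibits an apoptosis node) with illustrative parameter values; SciPy's Runge-Kutta integrator is used here in place of the GSL routine employed in the article.

import numpy as np
from scipy.integrate import solve_ivp

def act(x, alpha=1.0, gamma=1.0, eta=1.0):
    """Hill-type activation: alpha * x^eta / (x^eta + gamma^eta)."""
    return alpha * x**eta / (x**eta + gamma**eta)

def inh(x, alpha=1.0, gamma=1.0, eta=1.0):
    """Hill-type inhibition: alpha * gamma^eta / (x^eta + gamma^eta)."""
    return alpha * gamma**eta / (x**eta + gamma**eta)

# Toy network: R (receptor) activates P (proliferation) and inhibits A (apoptosis).
beta = {"R": 0.1, "P": 0.001, "A": 0.001}   # independent (basal) activities
delta = 1.0                                  # first-order decay, same for all nodes

def odes(t, y):
    R, P, A = y
    dR = -delta * R + beta["R"]
    # Activating links are added to the basal production term ...
    dP = -delta * P + (beta["P"] + act(R))
    # ... inhibiting links multiply the whole production term (basal plus activations).
    dA = -delta * A + beta["A"] * inh(R)
    return [dR, dP, dA]

# Integrate long enough to reach an approximate steady state.
sol = solve_ivp(odes, (0, 500), y0=[0.0, 0.0, 0.0], method="RK45")
print(dict(zip(["R", "P", "A"], sol.y[:, -1].round(4))))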
This modelling framework enabled the integration of experimental information in a straightforward way, using a well-established formalism derived from classical enzyme kinetics, and made it possible to test different model variations, such as the (combined) inhibition of FLT3/CDK6 in the model. This approach requires only the knowledge necessary to set up Boolean models (where interaction is assumed to be binary, i.e., activation or inhibition). Yet it provides quantitative insights into the studied signalling networks, taking into account nonlinear signalling effects such as feedbacks, pleiotropy and redundancy.
The simulation procedure yielded steady-state activity levels of the different network components according to a given set of parameters. The steady state of the ODE system was calculated numerically using the GSL library [80] (by use of gsl_odeiv2_step_rk4, which employs the explicit 4th order Runge-Kutta algorithm). With this procedure, the steady-state value of each node is obtained for a given parameter set. The range of independent activities of the different network components (β) used is summarised in Additional file 1: Table S1. Sensitivity analysis was applied to the resulting steady-state activities by calculating the sensitivity corresponding to each parameter change in the combinatorial parameter space according to
$$ {{\varepsilon}}^{Y}_{\phi} = \frac{\partial [ln(Y)]}{\partial [ln(\phi)]} = \frac{\phi}{Y} \cdot \frac{\partial Y}{\partial \phi} \approx \frac{\Delta [ln(Y)]}{\Delta [ln(\phi)]} = \frac{ln(Y_{i} / Y_{j})}{ln(\phi_{i} / \phi_{j})} $$
where the sensitivity \({{\varepsilon }}^{Y}_{\phi }\) is represented as a function of the input parameter set \(\phi\) and of the output variable \(Y\). Equation 4 expresses the relative change of activity in the nodes as a function of varying parameter sets. Two conditions (\(i\) and \(j\)) are evaluated at each step of the computational procedure according to the right-hand approximation. Here, the conditions are represented by vectors of steady-state values (\(Y_i\) and \(Y_j\)) that correspond to the nodes' activities and are determined by the parameter sets (\(\phi_i\) and \(\phi_j\)).
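The elasticity of Eq. 4 can be illustrated with a short sketch; the steady-state and parameter values used below are hypothetical.

import numpy as np

def sensitivity(y_i, y_j, phi_i, phi_j):
    """Log-log sensitivity (Eq. 4): ln(Y_i/Y_j) / ln(phi_i/phi_j)."""
    return np.log(y_i / y_j) / np.log(phi_i / phi_j)

# Hypothetical example: the steady-state activity of the 'proliferation'
# end-point rises from 0.09 to 0.40 when the independent activity (beta)
# of FLT3 is switched from 0.001 (low) to 0.1 (high).
eps = sensitivity(y_i=0.40, y_j=0.09, phi_i=0.1, phi_j=0.001)
print(f"sensitivity = {eps:.2f}")   # positive value: FLT3 drives proliferation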
In order to reveal co-activity and co-regulatory patterns between the nodes in the multi-dimensional simulation data, the resulting steady-state activity and sensitivity values were further explored through multivariate analysis (see "Principal component analysis and hierarchical clustering" section).
Steady-state simulations and sensitivity analysis were carried out using parallel computational architectures in order to screen a large number of conditions and identify key control points of the different networks. This enabled us to methodically characterise the effect of inhibition of FLT3 and/or CDK6 in the different network models.
Sensitivity analysis in coarse-grained simulations
Coarse-grained simulations consisted of enumerating all combinations of network states with high (β=0.1) or low (β=0.001) initial activity state (see Additional file 1: Table S1). Each pair of combinations that differed by a single parameter (i.e., where the network state differed by the activity of a single node), was used to compute the sensitivity (Eq. 4) of the given modification according to the method used in reference [40], i.e.,
$$ {}{{\varepsilon}}^{SS(N_{i})_{\beta(N_{j})=low} \: \rightarrow \: SS(N_{i})_{\beta(N_{j})=high} }_{{ \beta(N_{j})=low} \: \rightarrow \: \beta(N_{j})=high } = \frac{ ln \bigg \{ \frac{SS(N_{i})_{\beta(N_{j})=high} }{ SS(N_{i})_{\beta(N_{j})=low}} \bigg \} }{ ln \bigg \{ \frac{{\beta(N_{j})=high} }{{\beta(N_{j})=low}} \bigg \} } $$
where SS(N) denotes the steady-state activity of a node N and β(N) its independent activity state. The arrow (→) indicates a change in condition.
By considering not the combined activity change of multiple control nodes simultaneously, but only changes occurring one after another (as would be expected from point mutations affecting the activity of a protein), Eq. 5 can be evaluated over the \(s^n\) conditions that represent all possible states of the network (where \(s\) is the number of states a node can assume and \(n\) is the number of nodes in the network).
Sensitivity is subsequently computed for each pair of simulated conditions that differ by a single parameter (i.e., pair of simulations where the network states are identical except for a single node that is low in the first simulation and high in the second, or vice versa). This resulted in a set of calculated sensitivities derived from the coarse-grained simulations that comprises \(s^{n} \cdot \frac {n}{s} \cdot (s-1)\) sensitivity values from which sensitivity profiles and signal flow graphs are computed (see "Sensitivity profiles and signal flow graphs" section).
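A small combinatorial sketch of this enumeration is given below, assuming an illustrative four-node network; it lists the \(s^n\) states and the pairs of states that differ in a single node, i.e., the pairs for which sensitivities are computed.

from itertools import product

# Enumerate all coarse-grained network states for a small illustrative
# network of n nodes, each with s = 2 possible independent activities.
nodes = ["FLT3", "CDK6", "HCK", "PI3K"]             # illustrative subset, n = 4
levels = (0.001, 0.1)                               # low / high beta, s = 2
states = list(product(levels, repeat=len(nodes)))   # s**n states

# Pairs of states that differ in the activity of exactly one node:
# these are the pairs for which a sensitivity value (Eq. 5) is computed.
pairs = [
    (a, b)
    for i, a in enumerate(states)
    for b in states[i + 1:]
    if sum(x != y for x, y in zip(a, b)) == 1
]

s, n = len(levels), len(nodes)
print(len(states), "states,", len(pairs), "state pairs")
print("expected pairs:", (s**n * n // s) * (s - 1))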
Each sensitivity value expresses the strength of a link between two components of the network, regardless of the degree of connection (directly or through intermediates). A positive sensitivity between two nodes (A → B) indicates that, upon an increase in the activity of A, B's activity also increases. Similarly, a negative sensitivity indicates that an increase in A's activity decreases B's activity. Sensitivity values close to 0 indicate independence between nodes.
Sensitivity profiles and signal flow graphs
We tested each possible combination of the network nodes (high or low initial activity), for each network simulated (intact, inhibited FLT3, inhibited CDK6, inhibited FLT3 and CDK6). The results are presented by "sensitivity profile plots" and "signal flow graphs", as described below.
Sensitivity profile.
The central sensitivity profile plots in Additional file 1: Figures S1–S4 represent the sensitivity calculated for each network simulated by coarse-grained simulations, ranked in ascending order. The majority of the combinations had no effect on the network end-points. These are represented by the flat part of the plots (black for the end-point proliferation, grey for the end-point apoptosis). A minority of the sensitivity values were far from zero: the red dots represented positive values for proliferation, whereas the blue dots represented negative values for apoptosis (in this case, we consider that the cancer-driving process is LOA, therefore we observe a negative sensitivity). These values far from zero represent a subset of nodes which, upon their increased activity, significantly contribute to activate proliferation or inhibit apoptosis.
This subset of nodes responsible for high and low sensitivities (top-2% (red) and bottom-2% (blue) portion for proliferation and apoptosis, respectively) were further analysed to identify how strongly certain nodes were associated to proliferation and apoptosis. The bar plots connected to top-/bottom-2% regions of the sensitivity profile show the frequency of the nodes, that upon switching from low to high activity, contribute to increase proliferation (red) or decrease apoptosis (blue).
Signal flow.
Signal flow graphs connected to the bars of the bar plots represent how the signal travels from the control node (node indicated on the bar) to the end-points (top and bottom graphs in Additional file 1: Figures S1–S4) according to the method described in reference [40]. Briefly, we define "control nodes" as the nodes that, upon a change in their activity (owing to external or internal perturbations), would cause changes in the activity of the other nodes in the network. End-point nodes, in contrast, are the nodes that contribute to the development of AML (red nodes in Fig. 1: proliferation and apoptosis).
In order to examine pathways that a signal is more prone to follow, due to the network topology, from a control node to the network end-points, the proportions of the occurrence of high and low activity for each node in coarse-grained simulations were calculated when the endpoints were highly active. If a node has no correlation with an endpoint, the corresponding proportion is expected to be ∼ 50%. The larger the deviation from this proportion, the larger the involvement of the node within the network.
Any individual node's activity change (from low to high) influences not only the activity of the endpoints but also that of all other nodes. The average activity of any node i as a consequence of an activity change of the control node, j, is:
$$ \widehat{\Upsilon_{i, \beta(j)=high}}=\overline{SS_{i,\beta(j)=0.1}}, j \ne i $$
where the bar denotes an average and \(SS_{i,\beta(j)}\) the steady-state of node i when the control node j is set to an independent activity of β(j). Similarly, \(\widehat {\Upsilon _{i, \beta (j)=low}}\) is calculated as:
$$ \widehat{\Upsilon_{i, \beta(j)=low}}=\overline{SS_{i,\beta(j)=0.001}}, j \ne i $$
The ratio \(\widehat {\Upsilon _{i, \beta (j)=high}} / \widehat {\Upsilon _{i, \beta (j)=low}}\) represents the effect of the control node's independent activity change (β(j)=low→high) on the steady-state activity of any other node (\(SS_{i}\)).
Upon activation of the control node, the statistical association of the other nodes that are influenced is represented by the graph's node area (the larger the area, the stronger the association). The colour of the nodes indicates their activity contribution (the darker a node, the higher its \(\widehat {\Upsilon _{i, \beta (j)=high}} / \widehat {\Upsilon _{i, \beta (j)=low}}\) ratio, and thus the stronger the signal it can deliver downstream).
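As a minimal sketch of this signal flow calculation, assuming hypothetical coarse-grained simulation outputs, the ratio of the averaged steady states of a node i for β(j) = 0.1 versus β(j) = 0.001 can be computed as follows.

import numpy as np

# Hypothetical steady states of node i over the simulated conditions in which
# the control node j is set to high (beta = 0.1) or low (beta = 0.001) activity.
ss_i_beta_high = np.array([0.62, 0.58, 0.71, 0.66])
ss_i_beta_low = np.array([0.21, 0.18, 0.25, 0.22])

upsilon_high = ss_i_beta_high.mean()   # average steady state, beta(j) = high
upsilon_low = ss_i_beta_low.mean()     # average steady state, beta(j) = low

# A ratio > 1 indicates that activating the control node j pushes node i
# towards higher activity (a darker node in the signal flow graphs).
print(f"Upsilon ratio = {upsilon_high / upsilon_low:.2f}")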
Sensitivity analysis in fine-grained simulations
Based on the same mathematical principles as in the coarse-grained simulations, in fine-grained simulations the majority of the network components were assumed to have a low (resting) activity, while a few nodes, identified by the coarse-grained simulations as relevant for controlling the network behaviour, were varied over a range of activities (β) in small steps (as explained in reference [40] and expanded in Additional file 1: Table S1). This way, a more in-depth, quantitative understanding of the influence of the control nodes on the network endpoints is achieved (see Fig. 2). This yielded a more detailed characterisation of those nodes that were critical for controlling the network end-points and consequently relevant for cancer development.
Principal component analysis and hierarchical clustering
PCA was used as a multivariate analysis to reduce dimensionality of the fine-grained simulations (the prcomp function of R was used as a part of the computational method developed by us previously [38, 39]). It was applied to visualise PCA loadings (corresponding to the network components) of steady-state and sensitivity data on a two-component space (as presented in the top panels in Additional file 1: Figures S7–S9). PCA loadings were further classified using hierarchical clustering (the hclust function of R was used) and represented in a tree-like structure (dendrogram) whose branches grouped network components according to their similarity over the different simulations (displayed in the bottom dendrograms of Additional file 1: Figures S7–S9).
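The original analysis relies on R's prcomp and hclust; the following Python sketch (scikit-learn and SciPy) illustrates an analogous PCA plus hierarchical clustering on a hypothetical matrix of network components by simulated conditions, and is not the code used in the study.

import numpy as np
from sklearn.decomposition import PCA
from scipy.cluster.hierarchy import linkage

rng = np.random.default_rng(0)
# Hypothetical matrix: rows = network components, columns = simulated conditions
# (steady-state or sensitivity values).
X = rng.random((20, 50))

# Project the component profiles onto a two-component space.
pca = PCA(n_components=2)
coords = pca.fit_transform(X)
print("explained variance ratio:", pca.explained_variance_ratio_)

# Hierarchical clustering of the components; the resulting linkage matrix can be
# drawn as a dendrogram grouping components with similar behaviour.
Z = linkage(X, method="average", metric="euclidean")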
Model potential and limitations
A limitation of our approach is that quantitative information cannot be obtained for all proteins or complexes of a living model. This prevents precise predictions of the model kinetics and does not allow time-related properties of the dynamical system, such as oscillations, to be taken into account [81]. Estimating such quantities is challenging, since they can only be determined if a large number of microscopic parameters are available experimentally, while the remaining, unknown parameters are extrapolated by computational methods. Such an approach enables the set-up of mass-action-based models of remarkable predictive power for model systems that were specifically tailored. Examples of this approach could reveal crucial insights for the development of targeted inhibitors [43–48]. Unfortunately, the technical challenges in obtaining such high-quality information restrict its applicability to few cellular signalling systems. On the other extreme, there are modelling techniques that require only limited, approximate information to make useful predictions based only on node connectivities (e.g., Boolean networks, Petri nets) [49, 50]. In between, our model uses the assumption of steady-state between network components. It requires only the minimal information needed to set up Boolean models, but has the advantage of assuming continuous regulation between nodes, although implemented in a more approximate way compared to detailed mass-action-based models. The advantage of our proposed model is that it enables the study of signal transduction pathways for which only sparse information is available, consequently making poorly described disease networks tractable by simulation. This opens the way for computer-assisted analysis of the majority of complex diseases for which only limited molecular details are available.
Parametric and structural uncertainty were studied in our previous work. The first denotes the changes in the network nodes' activity as parameters are varied, while the second considers the qualitative behaviour of the network as a function of the number of nodes considered (e.g., by approximating multiple signalling components as a merged entity). We showed that consistent results were obtained when comparing simulations in which parameters were single-valued to simulations where a numerical range was used for each parameter screened. The method proved robust against a wide range of parameter variation, demonstrating reliability with respect to parametric uncertainty [38]. We also showed that we could obtain equivalent results after adding ∼ 50% of nodes and links to a network (note that robustness tests consider a network highly robust if it can tolerate a variation of 5–20% in the number of nodes [82, 83]). This demonstrates that the method is robust with respect to structural uncertainty [39].
Of note, our model can be refined once additional experimental evidence becomes available. Both the elucidation of new signalling pathways interacting with components of our network model (e.g., from omics experiments) and the effects of therapeutic inhibitors (and combinations thereof) are information that can be easily integrated into our current model.
Abbreviations
ALL: Acute lymphoblastic leukaemia
AKT: Protein kinase B
AML: Acute myeloid leukaemia
BCL2-BAD: BCL2-family protein – BCL2 antagonist of cell death
CDK6: Cyclin dependent kinase 6
CREB: Cyclic adenosine monophosphate-response element binding protein
ELK: ETS domain-containing protein
FDA: US Food and Drug Administration
FLT3: Fms-like tyrosine kinase 3
FLT3L: FLT3-ligand
GAB2: GRB2-binding protein
GRB2-SOS: Son of sevenless
HCK: Hematopoietic cell kinase
ITD: Internal tandem duplication
LOA: Loss of apoptosis
MEK/ERK: Mitogen-activated ERK kinase / extracellular signal-regulated kinase
mTOR: Mammalian target of rapamycin
NPM-ALK: Nucleophosmin anaplastic lymphoma kinase
PCA: Principal component analysis
PI3K: Phosphoinositide 3 kinase
ODEs: Ordinary differential equations
RSK: 90-kDa ribosomal protein S6 kinase
RTKs: Receptor tyrosine kinases
SHC: SH2-containing sequence proteins
SHIP: SH2-domain-containing inositol phosphatase
SHP2: SH2-domain-containing protein tyrosine phosphatase 2
STAT: Signal transducer and activators of transcription
Friedman R, Caflisch A. Discovery of plasmepsin inhibitors by fragment-based docking and consensus scoring. ChemMedChem. 2009; 4:1317–26.
Kubinyi H. Qsar and 3d qsar in drug design part 1: methodology. Drug Discov Today. 1997; 2(11):457–67.
Kubinyi H. Qsar and 3d qsar in drug design part 2: applications and problems. Drug Discov Today. 1997; 2(12):538–46.
Alvarsson J, Lampa S, Schaal W, Andersson C, Wikberg JE, Spjuth O. Large-scale ligand-based predictive modelling using support vector machines. J Cheminform. 2016; 8:39.
Lampa S, Alvarsson J, Spjuth O. Towards agile large-scale predictive modelling in drug discovery with flow-based programming design principles. J Cheminform. 2016; 8:67.
van de Waterbeemd H, Gifford E. ADMET in silico modelling: towards prediction paradise?. Nat Rev Drug Discov. 2003; 2(3):192–204.
Datta S, Grant DJ. Crystal structures of drugs: advances in determination, prediction and engineering. Nat Rev Drug Discov. 2004; 3(1):42–57.
Brown ED, Wright GD. Antibacterial drug discovery in the resistance era. Nature. 2016; 529(7586):336–43.
Gjini E, Brito PH. Integrating Antimicrobial Therapy with Host Immunity to Fight Drug-Resistant Infections: Classical vs Adaptive Treatment. PLoS Comput Biol. 2016; 12(4):1004857.
Friedman R. Drug resistance missense mutations in cancer are subject to evolutionary constraints. PLoS ONE. 2013; 8(12):82059.
Stirewalt DL, Radich JP. The role of FLT3 in haematopoietic malignancies. Nat Rev Cancer. 2003; 3(9):650–65.
Grafone T, Palmisano M, Nicci C, Storti S. An overview on the role of FLT3-tyrosine kinase receptor in acute myeloid leukemia: biology and treatment. Oncol Rev. 2012; 6(1):8.
Gschwind A, Fischer OM, Ullrich A. The discovery of receptor tyrosine kinases: targets for cancer therapy. Nat Rev Cancer. 2004; 4(5):361–70.
Lopez S, Voisset E, Tisserand JC, Mosca C, Prebet T, Santamaria D, Dubreuil P, De Sepulveda P. An essential pathway links FLT3-ITD, HCK and CDK6 in acute myeloid leukemia. Oncotarget. 2016; 7(32):51163–73.
Frohling S, Scholl C, Levine RL, Loriaux M, Boggon TJ, Bernard OA, Berger R, Dohner H, Dohner K, Ebert BL, Teckie S, Golub TR, Jiang J, Schittenhelm MM, Lee BH, Griffin JD, Stone RM, Heinrich MC, Deininger MW, Druker BJ, Gilliland DG. Identification of driver and passenger mutations of FLT3 by high-throughput DNA sequence analysis and functional assessment of candidate alleles. Cancer Cell. 2007; 12(6):501–13.
Yamamoto Y, Kiyoi H, Nakano Y, Suzuki R, Kodera Y, Miyawaki S, Asou N, Kuriyama K, Yagasaki F, Shimazaki C, Akiyama H, Saito K, Nishimura M, Motoji T, Shinagawa K, Takeshita A, Saito H, Ueda R, Ohno R, Naoe T. Activating mutation of D835 within the activation loop of FLT3 in human hematologic malignancies. Blood. 2001; 97(8):2434–9.
Whitman SP, Ruppert AS, Radmacher MD, Mrozek K, Paschka P, Langer C, Baldus CD, Wen J, Racke F, Powell BL, Kolitz JE, Larson RA, Caligiuri MA, Marcucci G, Bloomfield CD. FLT3 D835/I836 mutations are associated with poor disease-free survival and a distinct gene-expression signature among younger adults with de novo cytogenetically normal acute myeloid leukemia lacking FLT3 internal tandem duplications. Blood. 2008; 111(3):1552–9.
Matsuno N, Nanri T, Kawakita T, Mitsuya H, Asou N. A novel FLT3 activation loop mutation N841K in acute myeloblastic leukemia. Leukemia. 2005; 19(3):480–1.
Kindler T, Breitenbuecher F, Kasper S, Estey E, Giles F, Feldman E, Ehninger G, Schiller G, Klimek V, Nimer SD, Gratwohl A, Choudhary CR, Mueller-Tidow C, Serve H, Gschaidmeier H, Cohen PS, Huber C, Fischer T. Identification of a novel activating mutation (Y842C) within the activation loop of FLT3 in patients with acute myeloid leukemia (AML). Blood. 2005; 105(1):335–40.
Friedman R. Drug resistance in cancer: molecular evolution and compensatory proliferation. Oncotarget. 2016; 7(11):11746–55.
Gallogly MM, Lazarus HM. Midostaurin: an emerging treatment for acute myeloid leukemia patients. J Blood Med. 2016; 7:73–83.
Heidel F, Solem FK, Breitenbuecher F, Lipka DB, Kasper S, Thiede MH, Brandts C, Serve H, Roesel J, Giles F, Feldman E, Ehninger G, Schiller GJ, Nimer S, Stone RM, Wang Y, Kindler T, Cohen PS, Huber C, Fischer T. Clinical resistance to the kinase inhibitor PKC412 in acute myeloid leukemia by mutation of Asn-676 in the FLT3 tyrosine kinase domain. Blood. 2006; 107(1):293–300.
Williams AB, Nguyen B, Li L, Brown P, Levis M, Leahy D, Small D. Mutations of FLT3/ITD confer resistance to multiple tyrosine kinase inhibitors. Leukemia. 2013; 27(1):48–55.
Rathert P, Roth M, Neumann T, Muerdter F, Roe JS, Muhar M, Deswal S, Cerny-Reiterer S, Peter B, Jude J, Hoffmann T, Boryń LM, Axelsson E, Schweifer N, Tontsch-Grunt U, Dow LE, Gianni D, Pearson M, Valent P, Stark A, Kraut N, Vakoc CR, Zuber J. Transcriptional plasticity promotes primary and acquired resistance to BET inhibition. Nature. 2015; 525(7570):543–7.
Fong CY, Gilan O, Lam EY, Rubin AF, Ftouni S, Tyler D, Stanley K, Sinha D, Yeh P, Morison J, Giotopoulos G, Lugo D, Jeffrey P, Lee SC, Carpenter C, Gregory R, Ramsay RG, Lane SW, Abdel-Wahab O, Kouzarides T, Johnstone RW, Dawson SJ, Huntly BJ, Prinjha RK, Papenfuss AT, Dawson MA. BET inhibitor resistance emerges from leukaemia stem cells. Nature. 2015; 525(7570):538–42.
Rottapel R, Turck CW, Casteran N, Liu X, Birnbaum D, Pawson T, Dubreuil P. Substrate specificities and identification of a putative binding site for PI3K in the carboxy tail of the murine Flt3 receptor tyrosine kinase. Oncogene. 1994; 9(6):1755–65.
Dosil M, Wang S, Lemischka IR. Mitogenic signalling and substrate specificity of the Flk2/Flt3 receptor tyrosine kinase in fibroblasts and interleukin 3-dependent hematopoietic cells. Mol Cell Biol. 1993; 13(10):6572–85.
Marchetto S, Fournier E, Beslu N, Aurran-Schleinitz T, Dubreuil P, Borg JP, Birnbaum D, Rosnet O. SHC and SHIP phosphorylation and interaction in response to activation of the FLT3 receptor. Leukemia. 1999; 13(9):1374–82.
Zhang S, Mantel C, Broxmeyer HE. Flt3 signaling involves tyrosyl-phosphorylation of SHP-2 and SHIP and their association with Grb2 and Shc in Baf3/Flt3 cells. J Leukoc Biol. 1999; 65(3):372–80.
Zhang S, Broxmeyer HE. p85 subunit of PI3 kinase does not bind to human Flt3 receptor, but associates with SHP2, SHIP, and a tyrosine-phosphorylated 100-kDa protein in Flt3 ligand-stimulated hematopoietic cells. Biochem Biophys Res Commun. 1999; 254(2):440–5.
Srinivasa SP, Doshi PD. Extracellular signal-regulated kinase and p38 mitogen-activated protein kinase pathways cooperate in mediating cytokine-induced proliferation of a leukemic cell line. Leukemia. 2002; 16(2):244–53.
Martelli AM, Evangelisti C, Chiarini F, McCubrey JA. The phosphatidylinositol 3-kinase/Akt/mTOR signaling network as a therapeutic target in acute myelogenous leukemia patients. Oncotarget. 2010; 1(2):89–103.
Altman JK, Sassano A, Platanias LC. Targeting mTOR for the treatment of AML, New agents and new directions. Oncotarget. 2011; 2(6):510–7.
Anjum R, Blenis J. The RSK family of kinases: emerging roles in cellular signalling. Nat Rev Mol Cell Biol. 2008; 9(10):747–58.
Uras IZ, Walter GJ, Scheicher R, Bellutti F, Prchal-Murphy M, Tigan AS, Valent P, Heidel FH, Kubicek S, Scholl C, Frohling S, Sexl V. Palbociclib treatment of FLT3-ITD+ AML cells uncovers a kinase-dependent transcriptional regulation of FLT3 and PIM1 by CDK6. Blood. 2016; 127(23):2890–2902.
Hirade T, Abe M, Onishi C, Taketani T, Yamaguchi S, Fukuda S. Internal tandem duplication of FLT3 deregulates proliferation and differentiation and confers resistance to the FLT3 inhibitor AC220 by Up-regulating RUNX1 expression in hematopoietic cells. Int J Hematol. 2016; 103(1):95–106.
Park IK, Mundy-Bosse B, Whitman SP, Zhang X, Warner SL, Bearss DJ, Blum W, Marcucci G, Caligiuri MA. Receptor tyrosine kinase Axl is required for resistance of leukemic cells to FLT3-targeted therapy in acute myeloid leukemia. Leukemia. 2015; 29(12):2382–9.
Buetti-Dinh A, Pivkin IV, Friedman R. S100A4 and its role in metastasis – computational integration of data on biological networks. Mol Biosyst. 2015; 11(8):2238–46.
Buetti-Dinh A, Pivkin IV, Friedman R. S100A4 and its role in metastasis – simulations of knockout and amplification of epithelial growth factor receptor and matrix metalloproteinases. Mol Biosyst. 2015; 11(8):2247–54.
Buetti-Dinh A, O'Hare T, Friedman R. Sensitivity Analysis of the NPM-ALK Signalling Network Reveals Important Pathways for Anaplastic Large Cell Lymphoma Combination Therapy. PLoS ONE. 2016; 11(9):0163011.
Karlebach G, Shamir R. Modelling and analysis of gene regulatory networks. Nat Rev Mol Cell Biol. 2008; 9(10):770–80.
Fisher J, Henzinger TA. Executable cell biology. Nat Biotechnol. 2007; 25(11):1239–49.
Tigges M, Marquez-Lago TT, Stelling J, Fussenegger M. A tunable synthetic mammalian oscillator. Nature. 2009; 457(7227):309–12.
Zavala E, Marquez-Lago TT. Delays induce novel stochastic effects in negative feedback gene circuits. Biophys J. 2014; 106(2):467–78.
Schoeberl B, Pace EA, Fitzgerald JB, Harms BD, Xu L, Nie L, Linggi B, Kalra A, Paragas V, Bukhalid R, Grantcharova V, Kohli N, West KA, Leszczyniecka M, Feldhaus MJ, Kudla AJ, Nielsen UB. Therapeutically targeting ErbB3: a key node in ligand-induced activation of the ErbB receptor-PI3K axis. Sci Signal. 2009; 2(77):31.
Chen WW, Schoeberl B, Jasper PJ, Niepel M, Nielsen UB, Lauffenburger DA, Sorger PK. Input-output behavior of ErbB signaling pathways as revealed by a mass action model trained against dynamic data. Mol Syst Biol. 2009; 5:239.
Kirouac DC, Schaefer G, Chan J, Merchant M, Orr C, Huang SA, Moffat J, Liu L, Gadkar K, Ramanujan S. Clinical responses to ERK inhibition in BRAFV600E-mutant colorectal cancer predicted using a computational model. NPJ Syst Biol Appl. 2017; 3:14.
Kirouac DC, Du JY, Lahdenranta J, Overland R, Yarar D, Paragas V, Pace E, McDonagh CF, Nielsen UB, Onsum MD. Computational modeling of ERBB2-amplified breast cancer identifies combined ErbB2/3 blockade as superior to the combination of MEK and AKT inhibitors. Sci Signal. 2013; 6(288):68.
Feiglin A, Hacohen A, Sarusi A, Fisher J, Unger R, Ofran Y. Static network structure can be used to model the phenotypic effects of perturbations in regulatory networks. Bioinformatics. 2012; 28(21):2811–8.
Ruths D, Muller M, Tseng JT, Nakhleh L, Ram PT. The signaling petri net-based simulator: a non-parametric strategy for characterizing the dynamics of cell-specific signaling networks. PLoS Comput Biol. 2008; 4(2):1000005.
Silverbush D, Grosskurth S, Wang D, Powell F, Gottgens B, Dry J, Fisher J. Cell-Specific Computational Modeling of the PIM Pathway in Acute Myeloid Leukemia. Cancer Res. 2017; 77(4):827–38.
Hall BA, Piterman N, Hajnal A, Fisher J. Emergent stem cell homeostasis in the C, elegans germline is revealed by hybrid modeling. Biophys J. 2015; 109(2):428–38.
Foo J, Liu LL, Leder K, Riester M, Iwasa Y, Lengauer C, Michor F. An Evolutionary Approach for Identifying Driver Mutations in Colorectal Cancer. PLoS Comput Biol. 2015; 11(9):1004350.
Foo J, Michor F. Evolution of acquired resistance to anti-cancer therapy. J Theor Biol. 2014; 355:10–20.
Mumenthaler SM, Foo J, Leder K, Choi NC, Agus DB, Pao W, Mallick P, Michor F. Evolutionary modeling of combination treatment strategies to overcome resistance to tyrosine kinase inhibitors in non-small cell lung cancer. Mol Pharm. 2011; 8(6):2069–79.
Vogelstein B, Papadopoulos N, Velculescu VE, Zhou S, Diaz LA, Kinzler KW. Cancer genome landscapes. Science. 2013; 339(6127):1546–58.
Kreso A, O'Brien CA, van Galen P, Gan OI, Notta F, Brown AM, Ng K, Ma J, Wienholds E, Dunant C, Pollett A, Gallinger S, McPherson J, Mullighan CG, Shibata D, Dick JE. Variable clonal repopulation dynamics influence chemotherapy response in colorectal cancer. Science. 2013; 339(6119):543–8.
Gupta PB, Fillmore CM, Jiang G, Shapira SD, Tao K, Kuperwasser C, Lander ES. Stochastic state transitions give rise to phenotypic equilibrium in populations of cancer cells. Cell. 2011; 146(4):633–44.
de Bruin EC, McGranahan N, Mitter R, Salm M, Wedge DC, Yates L, Jamal-Hanjani M, Shafi S, Murugaesu N, Rowan AJ, Gronroos E, Muhammad MA, Horswell S, Gerlinger M, Varela I, Jones D, Marshall J, Voet T, Van Loo P, Rassl DM, Rintoul RC, Janes SM, Lee SM, Forster M, Ahmad T, Lawrence D, Falzon M, Capitanio A, Harkins TT, Lee CC, Tom W, Teefe E, Chen SC, Begum S, Rabinowitz A, Phillimore B, Spencer-Dene B, Stamp G, Szallasi Z, Matthews N, Stewart A, Campbell P, Swanton C. Spatial and temporal diversity in genomic instability processes defines lung cancer evolution. Science. 2014; 346(6206):251–6.
Zhang J, Fujimoto J, Zhang J, Wedge DC, Song X, Zhang J, Seth S, Chow CW, Cao Y, Gumbs C, Gold KA, Kalhor N, Little L, Mahadeshwar H, Moran C, Protopopov A, Sun H, Tang J, Wu X, Ye Y, William WN, Lee JJ, Heymach JV, Hong WK, Swisher S, Wistuba II, Futreal PA. Intratumor heterogeneity in localized lung adenocarcinomas delineated by multiregion sequencing. Science. 2014; 346(6206):256–9.
Mumenthaler SM, Foo J, Choi NC, Heise N, Leder K, Agus DB, Pao W, Michor F, Mallick P. The Impact of Microenvironmental Heterogeneity on the Evolution of Drug Resistance in Cancer Cells. Cancer Inform. 2015; 14(Suppl 4):19–31.
Wodarz D, Komarova N. Can loss of apoptosis protect against cancer?. Trends Genet. 2007; 23(5):232–7.
Meyer C, Drexler HG. FLT3 ligand inhibits apoptosis and promotes survival of myeloid leukemia cell lines. Leuk Lymphoma. 1999; 32(5-6):577–81.
Zheng R, Levis M, Piloto O, Brown P, Baldwin BR, Gorin NC, Beran M, Zhu Z, Ludwig D, Hicklin D, Witte L, Li Y, Small D. FLT3 ligand causes autocrine signaling in acute myeloid leukemia cells. Blood. 2004; 103(1):267–74.
Poh AR, O'Donoghue RJ, Ernst M. Hematopoietic cell kinase (HCK) as a therapeutic target in immune and cancer cells. Oncotarget. 2015; 6(18):15752–71.
Saito Y, Yuki H, Kuratani M, Hashizume Y, Takagi S, Honma T, Tanaka A, Shirouzu M, Mikuni J, Handa N, Ogahara I, Sone A, Najima Y, Tomabechi Y, Wakiyama M, Uchida N, Tomizawa-Murasawa M, Kaneko A, Tanaka S, Suzuki N, Kajita H, Aoki Y, Ohara O, Shultz LD, Fukami T, Goto T, Taniguchi S, Yokoyama S, Ishikawa F. A pyrrolo-pyrimidine derivative targets human primary AML stem cells in vivo. Sci Transl Med. 2013; 5(181):181–52.
Poh AR, Love CG, Masson F, Preaudet A, Tsui C, Whitehead L, Monard S, Khakham Y, Burstroem L, Lessene G, Sieber O, Lowell C, Putoczki TL, O'Donoghue RJJ, Ernst M. Inhibition of Hematopoietic Cell Kinase Activity Suppresses Myeloid Cell-Mediated Colon Cancer Progression. Cancer Cell. 2017; 31(4):563–75.
O'Leary B, Finn RS, Turner NC. Treating cancer with selective CDK4/6 inhibitors. Nat Rev Clin Oncol. 2016; 13(7):417–30.
Placke T, Faber K, Nonami A, Putwain SL, Salih HR, Heidel FH, Kramer A, Root DE, Barbie DA, Krivtsov AV, Armstrong SA, Hahn WC, Huntly BJ, Sykes SM, Milsom MD, Scholl C, Frohling S. Requirement for CDK6 in MLL-rearranged acute myeloid leukemia. Blood. 2014; 124(1):13–23.
Hernandez Maganhi S, Jensen P, Caracelli I, Zukerman Schpector J, Frohling S, Friedman R. Palbociclib can overcome mutations in cyclin dependent kinase 6 that break hydrogen bonds between the drug and the protein. Protein Sci. 2017; 26(4):870–9.
Leung AY, Man CH, Kwong YL. FLT3 inhibition: a moving and evolving target in acute myeloid leukaemia. Leukemia. 2013; 27(2):260–8.
Weisberg E, Banerji L, Wright RD, Barrett R, Ray A, Moreno D, Catley L, Jiang J, Hall-Meyers E, Sauveur-Michel M, Stone R, Galinsky I, Fox E, Kung AL, Griffin JD. Potentiation of antileukemic therapies by the dual PI3K/PDK-1 inhibitor, BAG956: effects on BCR-ABL- and mutant FLT3-expressing cells. Blood. 2008; 111(7):3723–34.
Agrawal V, Kishan KV. Promiscuous binding nature of SH3 domains to their target proteins. Protein Pept Lett. 2002; 9(3):185–93.
Herrera-Abreu MT, Palafox M, Asghar U, Rivas MA, Cutts RJ, Garcia-Murillas I, Pearson A, Guzman M, Rodriguez O, Grueso J, Bellet M, Cortes J, Elliott R, Pancholi S, Baselga J, Dowsett M, Martin LA, Turner NC, Serra V. Early Adaptation and Acquired Resistance to CDK4/6 Inhibition in Estrogen Receptor-Positive Breast Cancer. Cancer Res. 2016; 76(8):2301–13.
Sherr CJ, Beach D, Shapiro GI. Targeting CDK4 and CDK6: From Discovery to Therapy. Cancer Discov. 2016; 6(4):353–67.
Milo R, Jorgensen P, Moran U, Weber G, Springer M. BioNumbers–the database of key numbers in molecular and cell biology. Nucleic Acids Res. 2010; 38(Database issue):750–3.
Hill AV. The possible effect of the aggregation of the molecules of haemoglobin on its dissociation curves. J Physiol. 1910; 40:4–7.
Cheng Z, Liu F, Zhang XP, Wang W. Robustness analysis of cellular memory in an autoactivating positive feedback system. FEBS Lett. 2008; 582(27):3776–82.
Song H, Smolen P, Av-Ron E, Baxter DA, Byrne JH. Dynamics of a minimal model of interlocked positive and negative feedback loops of transcriptional regulation by cAMP-response element binding proteins. Biophys J. 2007; 92(10):3407–24.
Galassi M, Davies J, Theiler J, Gough B, Jungman G, Alken P, Booth M, Rossi F. GNU Scientific Library Reference Manual, 3rd edn. ISBN 0954612078: United Kingdom: Network Theory Limited; 2009.
Novak B, Tyson JJ. Design principles of biochemical oscillators. Nat Rev Mol Cell Biol. 2008; 9(12):981–91.
Jeong H, Tombor B, Albert R, Oltvai ZN, Barabasi AL. The large-scale organization of metabolic networks. Nature. 2000; 407(6804):651–4.
Albert R, Jeong H, Barabasi AL. Error and attack tolerance of complex networks. Nature. 2000; 406(6794):378–82.
This work was supported by The Swedish Cancer Society (Cancerfonden), project number CAN 2015/387 to RF. The funder did not have any role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.
The datasets generated and/or analysed during the current study are available in the Figshare repository (DOI: 10.6084/m9.figshare.5472754).
Department of Chemistry and Biomedical Sciences, Linnæus University, Norra vägen 49, Kalmar, SE-391 82, Sweden
Antoine Buetti-Dinh & Ran Friedman
Linnæus University Centre for Biomaterials Chemistry, Linnæus University, Norra vägen 49, Kalmar, SE-391 82, Sweden
Centre for Ecology and Evolution in Microbial Model Systems, Linnæus University, Landgången 3, Kalmar, SE-391 82, Sweden
Antoine Buetti-Dinh
Institute of Computational Science, Faculty of Informatics, Università della Svizzera Italiana, Via Giuseppe Buffi 13, Lugano, CH-6900, Switzerland
Swiss Institute of Bioinformatics, Quartier Sorge – Batiment Genopode, Lausanne, CH-1015, Switzerland
Ran Friedman
ABD carried out the simulations, performed the analysis of the data and drafted the initial manuscript. RF initiated, supervised the project and participated in the data analysis. ABD and RF wrote the manuscript. Both authors read and approved the final manuscript.
Correspondence to Ran Friedman.
Supplementary Material. Sensitivity profiles and signal flow graphs (Supplementary Figures 1–4). Fine-grained simulations heat maps (Supplementary Figures 5–6). PCA and hierarchical clustering at different levels of PI3K (Supplementary Figures 7–9). Model parameters (Supplementary Table 1). (PDF 3213 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Buetti-Dinh, A., Friedman, R. Computer simulations of the signalling network in FLT3 +-acute myeloid leukaemia – indications for an optimal dosage of inhibitors against FLT3 and CDK6. BMC Bioinformatics 19, 155 (2018). https://doi.org/10.1186/s12859-018-2145-y
MOTANA: study protocol to investigate motor cerebral activity during a propofol sedation
Sébastien Rimbert ORCID: orcid.org/0000-0002-3314-7231 1,
Denis Schmartz2,
Laurent Bougrain1,
Claude Meistelman3,
Cédric Baumann5 &
Philippe Guerci4,3
Accidental awareness during general anesthesia (AAGA) occurs in 1–2% of high-risk practice patients and is a cause of severe psychological trauma, termed post-traumatic stress disorder (PTSD). However, no monitoring techniques can accurately predict or detect AAGA. Since the first reflex for a patient during AAGA is to move, a passive brain-computer interface (BCI) based on the detection of an intention of movement would be conceivable to alert the anesthetist. However, the way in which propofol (i.e., an anesthetic commonly used for the induction of general anesthesia) affects motor brain activity within the electroencephalographic (EEG) signal has been poorly investigated and is not clearly understood. For this reason, a detailed study of the motor activity behavior with a step-wise increasing dose of propofol is required and would provide a proof of concept for such an innovative BCI. The main goal of this study is to highlight the occurrence of movement attempt patterns, mainly changes in oscillations called event-related desynchronization (ERD) and event-related synchronization (ERS), in the EEG signal over the motor cortex, in healthy subjects, without and under propofol sedation, during four different motor tasks.
MOTANA is an interventional, prospective, exploratory, physiological, monocentric, and randomized study conducted in healthy volunteers under light anesthesia, involving EEG measurements before and after target-controlled infusion of propofol at three different effect-site concentrations (0 μg.ml −1, 0.5 μg.ml −1, and 1.0 μg.ml −1). In this exploratory study, 30 healthy volunteers will perform 50 trials for the four motor tasks (real movement, motor imagery, motor imagery with median nerve stimulation, and median nerve stimulation alone) in a randomized sequence. In each condition and for each trial, we will observe changes in terms of ERD and ERS according to the three propofol concentrations. Comparisons before and after propofol injection will be performed with paired tests.
MOTANA is an exploratory study aimed at designing an innovative BCI based on EEG-motor brain activity that would detect an attempt to move by a patient under anesthesia. This would be of interest in the prevention of AAGA.
Agence Nationale de Sécurité du Médicament (EUDRACT 2017-004198-1), NCT03362775. Registered on 29 August 2018. https://clinicaltrials.gov/ct2/show/NCT03362775?term=03362775&rank=1
Every year, several hundred million surgeries are performed worldwide [1]. Among these surgical procedures, 0.1–0.2% of patients are victims of an accidental awareness during general anesthesia (AAGA), i.e., an unexpected awakening of the patient during a surgical procedure under general anesthesia [2–4]. The estimated number of AAGAs can increase up to 1% in high-risk practice patients, despite apparently appropriate anesthesia administration [4–6]. Considering the high occurrence of this phenomenon, new solutions need to be found to solve this issue.
Beyond the terrifying aspect of this experience, AAGA can cause severe psychological trauma, called post-traumatic stress disorder (PTSD) [7]. Seventy percent of AAGA events are complicated with PTSD, resulting in many negative repercussions on the victim's life: anxiety, increased risk of suicide, insomnia, chronic fear, flashbacks, and lack of confidence in the medical staff [2, 3, 8, 9]. In addition, AAGAs may also generate legal claims that induce an additional cost for the hospital [2, 6].
Unfortunately, there are currently no satisfactory monitoring solutions sufficient to evaluate the depth of general anesthesia and detect intraoperative awakening [10, 11]. The anesthesiologist's observation of clinical signs is not sufficient to prevent an AAGA during surgery [12]. New indexes using part of the cortical frontal electroencephalographic (EEG) signal (e.g., Bispectral Index, Entropy, Patient State Index) have failed to demonstrate their reliability and superiority [5, 13–15]. Indeed, current brain monitors available for intraoperative analysis of the cortical frontal EEG may not adequately reflect an attempt of movement from the patient under anesthesia, especially if paralytics (neuromuscular blocker agents) are used.
During an AAGA, there is evidence that the patient's first reflex is to move to warn the surgical team [16]. However, patients may desperately attempt to move, without success [17] and perform a so-called intention to move. A motor imagery (MI) can be detected by recording the EEG signal over the motor cortex such as in the brain-computer interface (BCI) domain [18, 19]. Indeed, the mu (7–13 Hz) and beta (15–30 Hz) sensorimotor rhythms are characterized, before and during a MI, by a gradual decrease of power in mainly the alpha (mu) and beta bands, and after the end of the MI, by an increase of power in the beta band. These modulations are respectively known as event-related desynchronization (ERD) and event-related synchronization (ERS) or post-movement beta rebound [20–22].
Although a BCI based on the MI of a patient seems feasible [23], the impact of propofol on EEG activity, especially over the motor cortex, is still not fully understood. In 2016, Blokland et al. studied the effect of propofol on volunteer subjects who performed movements in response to sound beeps while an increasing dosage of anesthetic was administered to them [24]. The authors described the impact of propofol on the EEG signal and showed how the BCI domain could contribute to the issue of AAGAs, but they emphasized that this approach is not a realistic situation, since the patient is explicitly asked to perform a movement. To address this issue, we have shown in a previous article that a frequent stimulation of the median nerve is a very promising approach [23]. Indeed, previous studies have shown that a painless stimulation of the median nerve induces an ERD during the stimulation while an ERS appears after the stimulation [25, 26]. Our recent study showed that a MI was more effectively detected using a median nerve stimulation (MNS). These findings include promising classification results that would allow us to create a reliable device for use in the operating room. Therefore, we can imagine a routine system in which the median nerve of the patient would be stimulated, and the analysis of ERD and ERS modulations over the motor cortex would be used to determine whether the patient has an intention to move. However, that study was conducted without propofol, and the previous results need to be confirmed. The study we propose in this clinical protocol will provide the missing answers to these questions and provide insights into the design of such a BCI.
The main objective is to verify that ERD and ERS patterns can be detected in the cortical motor EEG signal under light general anesthesia conditions according to three different concentrations of propofol at the effect site (0 μg.ml −1, 0.5 μg.ml −1, and 1.0 μg.ml −1) during four different motor tasks (i.e., real movement, motor imagery, motor imagery and MNS, and MNS alone) in a randomized sequence. The secondary objective is to describe how the ERD and ERS generated by an MNS would be modulated according to three different concentrations of propofol. In addition, a combination of MNS and an intention to move will be studied to verify the hypotheses discussed in our previous article [23]. Finally, the forward-looking goal is a translational research project that will allow the development of a new monitoring device for the detection of intraoperative awareness.
Each volunteer subject recruited for the study will benefit from a pre-operative anesthetic assessment between 1 and 30 days before the experiment, performed by a trained anesthesiologist (PG). Only subjects with an American Society of Anesthesiologists (ASA) status of 1 who meet the inclusion criteria (see below) will be eligible. Exclusion criteria are also listed below. On the day of the experiment, subjects should have fasted for 6 h for solids and 2 h for clear liquids. The experiment will be held in a location approved by the Agence Régionale de Santé (n∘2017-2500), in the Surgical Intensive Care Unit JM Picard, Department of Anesthesiology and Critical Care Medicine, University Hospital of Nancy-Brabois, France. In addition to the EEG cap, all of the volunteer subjects will be asked to rest in a semirecumbent (15∘) supine position and will be continuously monitored with electrocardiography (ECG), non-invasive blood pressure measurement (NIBP), and pulse oximetry (SpO2) (GE Healthcare, Aulnay-sous-Bois, France), with an oxygen supplement delivered by nasal cannula (2 l.min −1). A 24G peripheral catheter (BD Insyte Autogard, Becton Dickinson, France) will be inserted in the left forearm and continuously infused with a crystalloid solution (Isofundine Ⓡ, B. Braun, Melsungen, AG, Germany). Finally, the subject will be infused with propofol LIPURO 1% (10 mg.ml −1, B. Braun, Melsungen, AG, Germany) using a target-controlled infusion pump with the Schnider pharmacokinetic model (B. Braun Perfusor, B. Braun, Melsungen, AG, Germany) at the effect site. During the first session of the experiment, no infusion of propofol will be performed (0 μg.ml −1). Anesthesia will be induced by an experienced staff anesthesiologist in charge of the study (PG). Intravenous anesthesia will be discontinued if the volunteer subject experiences a loss of consciousness.
Three sessions will be conducted: one without any anesthetic medication and two with a step-wise increase of the propofol concentration at the effect site (brain): 0 μg.ml −1, 0.5 μg.ml −1, and 1 μg.ml −1.
For each concentration of propofol, the cortical motor EEG signal will be recorded in several sessions corresponding to the four different motor tasks: during a right-hand voluntary real movement (RM), during a right-hand kinesthetic MI, during a right-hand MNS, and during a right-hand kinesthetic MI followed by a right-hand MNS (MI + MNS). The motor tasks will be performed in a random order according to a computer-generated randomization table. Each motor task will be composed of 50 trials. The four motor tasks will be randomized for each subject in order to avoid fatigue, gel drying, or other confounding factors that might bias the results. At the beginning of each run, the subject will remain relaxed for 15 s. Subjects will be asked to keep their eyes closed (Fig. 1). Figure 2 is the flowchart of the experimental scheme. Figure 3 is the schedule of enrollment, interventions, and assessments. The Standard Protocol Items: Recommendations for Interventional Trials (SPIRIT) checklist is provided as Additional file 1.
Paradigm scheme. Paradigm scheme illustrating the organization of the different sessions, tasks, and trials of the study. The study contains 3 conditions of propofol's concentrations (0 μg.ml −1, 0.5 μg.ml −1, and 1.0 μg.ml −1). For each concentration, 4 motor tasks will be studied (real movement, motor imagery (MI), median nerve stimulation (MNS), and MI + MNS). All motor tasks will start with a sound beep and be followed by a resting state. 50 trials per motor task will be performed
Flowchart of experimental scheme. The healthy volunteer subject will be lying down and fitted with an EEG helmet with 128 sensors, 32 of which will be placed at the level of the motor cortex. The signals will be recorded using the OpenViBE software. The infusion of propofol will be performed by an infusion pump with a concentration target
Schedule of enrollment, interventions, and assessments
Each trial will be conducted with the following steps: (1) a sound beep indicating to the subject that he/she must perform the action, (2) the action (RM or MI), and (3) a resting state period of a few seconds. During the MNS motor task, the delay will be shorter than 1 s because the stimulation will be very short (0.1 ms) and the motor patterns (ERD and ERS) will also be generated in the EEG. The right-hand median nerve will be stimulated in the same way as for a nerve conduction velocity measurement or an evoked potential recording. The stimulation is a painless transcutaneous stimulation using the specific Micromed device Sd Ltm Stim Energy (Micromed, Mâcon, France). The stimulus intensity will range between 3 and 14 mA. The stimulation duration will be 0.1 ms with a frequency of 5 Hz. We will place the two stimulation electrodes on the right wrist according to the standards [25, 28].
After all three sessions have been completed, the subject will have a 30-min rest period and will then complete a 10-min post-experimental questionnaire. He/she will then complete two discharge-readiness ("street fitness") questionnaires to assess the subject's ability to leave the hospital safely: the Chung questionnaire (Appendix 3) and the Aldrete questionnaire (Appendix 4). These two evaluation criteria have been validated by the scientific community and are commonly used for outpatient surgery. After completing the study, the subject will be contacted by phone 24 h after the experiment by the principal investigator to ensure that everything is fine.
In this clinical protocol, we have chosen to conduct the experiment on 30 healthy volunteers and not on patients. Indeed, recruiting patients would disrupt the functioning and organization of planned operations within the hospital. In addition, healthy volunteers can be trained for the task much more easily than patients going through surgery. To ensure the completion of the study, we have designed a flyer that will be distributed on social networks, and participation in the study will be compensated up to 80 euros. Healthy volunteers will be exposed only to the well-known risks of limited general anesthesia with propofol, because the dosages used in this clinical protocol are lower than those that induce a loss of consciousness in healthy subjects. The doses have been determined in order to prevent the subjects from being at any risk of loss of consciousness, which will allow them to perform the required motor tasks properly. Finally, the study population will only be male, because anesthesia should be avoided for pregnant women, and the detection of pregnancy would complicate the inclusion of subjects and generate additional costs. Female volunteers are initially excluded from the study due to the extra costs generated by the need for pregnancy tests (blood tests), which are not budgeted. However, in case of significant results in this preliminary study, female volunteers will subsequently be included to confirm the initial findings and make the results more generalizable. Moreover, this study only concerns right-handed people, as the literature has shown significant differences between left-handed and right-handed people. The healthy volunteers will be remunerated with 80 euros in the form of a purchase voucher.
To be included in the study, a person must:
Have received full details of the research organization and signed our informed consent;
Be aged between 18 and 28 years old;
Be a man;
Have a body mass index between 22 and 28;
Be right-handed;
Have undergone an appropriate clinical examination before the research is carried out;
Be affiliated to a social security regime.
The study will exclude a person who:
Is allergic to any of the ingredients in propofol (in particular, soybean oil and egg);
Has a known allergy to propofol;
Has a history of an anaphylactic reaction during anesthesia;
Is a female, due to the impossibility of checking pregnancy status;
Is a pregnant or parturient woman or a breastfeeding mother;
Is deprived of liberty by a judicial or administrative decision;
Is undergoing psychiatric care, admitted to a health or social institution for purposes other than research, subject to a legal protection measure (guardianship, curatorship, protection of justice), or in an emergency situation;
Is an adult who is unable to consent and who is not subject to a legal protection measure;
Has a condition that may interfere with EEG recording (i.e., diabetes, polyneuropathy, epilepsy, depression);
Has a drug addiction.
The randomization table will be generated by the methodologist in charge of the study (CB). The investigator (SR) will apply the randomization pre-established by the methodologist using pre-sealed envelopes and will note the sequence order of the tasks performed by the volunteer subject. Indeed, the subject may start with any one of the four motor tasks: RM, MI, MNS, or MI+MNS. The experiment is conducted openly, and only those who analyze the EEG data are blinded; blinding of the subjects or of the investigator is therefore not possible.
Ethics and trial registration
The study will be conducted in accordance with the principles of the Declaration of Helsinki and the Medical Research Involving Human Subjects Act [29]. This study has been approved by a national ethical committee (Comité de Protection des Personnes Ile de France 1) under number CPPIDF1-2018-ND16. The experiment has also been approved by the Agence Nationale de Sécurité du Médicament (N∘ EUDRACT 2017-004198-15). Finally, the study protocol was registered on ClinicalTrials.gov (NCT03362775). All patients will give written informed consent before study inclusion and randomization. Patient participation is voluntary; the participant can request to stop participation in the study at any time.
EEG data acquisition
EEG signals will be acquired using the OpenViBE platform [30] with a Biosemi Active Two 128-channel EEG system, arranged according to Biosemi's ABC layout covering the entire scalp and sampled at 2048 Hz. Among all recorded sites, some of the electrodes will be localized around the primary motor cortex, the motor cortex, the somatosensory cortex, and the occipital cortex, which will allow us to observe the physiological changes due to the RM, the kinesthetic MI, and the MNS [25, 26, 31, 32]. An external electromyogram (EMG) electrode will be added in order to verify that no movement occurs during the MI task. All offline analyses will be performed using the EEGLAB toolbox [33] and Matlab2016a (The MathWorks Inc., Natick, MA, USA). Considering the large number of electrodes used in this study (128) and the purpose of this research (motor patterns over the motor cortex), we chose to use common average referencing (CAR) performed using EEGLAB [34, 35]. The results will also be observed by applying a Laplacian filter and a mastoidal re-referencing [36]. Then, the EEG signals will be resampled at 128 Hz and windowed into 9-s epochs corresponding to 1 s before and 5 s after the motor task for each run.
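As a simple illustration of common average referencing (the re-referencing chosen above), a hypothetical epoch stored as a channels-by-samples array can be re-referenced in Python as follows; this is only a sketch, not the EEGLAB implementation that will actually be used.

import numpy as np

# Hypothetical raw epoch: 128 channels x samples at 2048 Hz.
epoch = np.random.randn(128, 2048)

# Common average reference: subtract the instantaneous mean over all channels
# from every channel.
car_epoch = epoch - epoch.mean(axis=0, keepdims=True)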
We will compute the ERD/ERS% using the "band power method" [37]:
$$ ERD/ERS\%=\frac{\overline{x^{2}}-\overline{BL^{2}}}{\overline{BL^{2}}}\times{100} \enspace, $$
where \(\overline {x^{2}}\) is the average of the squared signal smoothed using a 250-ms sliding window with a 100-ms shifting step, \(\overline {BL^{2}}\) is the mean of a baseline segment taken at the beginning of the corresponding trial, and ERD/ERS% is the percentage of the oscillatory power estimated for each step of the sliding window. A positive ERD/ERS% indicates a synchronization, whereas a negative ERD/ERS% indicates a desynchronization. This percentage will be computed separately for all EEG channels. The EEG signal will be filtered in the mu rhythm (7–13 Hz), in the beta band (15–30 Hz), and in the mu+beta band (8–30 Hz) for all subjects using a 4th-order Butterworth band-pass filter.
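A minimal Python sketch of this band power method, assuming a single-channel trial sampled at 128 Hz, could look as follows; the filtering, squaring, 250-ms smoothing with a 100-ms step, and baseline normalization follow the description above, while the data and helper name are hypothetical.

import numpy as np
from scipy.signal import butter, filtfilt

fs = 128  # Hz, sampling rate after resampling

def erd_ers_percent(signal, baseline, band=(8, 30), win=0.25, step=0.1):
    # 4th-order Butterworth band-pass filter in the chosen band.
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    x2 = filtfilt(b, a, signal) ** 2
    # Smooth the squared signal with a 250-ms window shifted by 100 ms.
    w, s = int(win * fs), int(step * fs)
    power = np.array([x2[i:i + w].mean() for i in range(0, len(x2) - w + 1, s)])
    bl = x2[baseline].mean()  # baseline segment of the same trial
    return (power - bl) / bl * 100  # > 0: ERS, < 0: ERD

# Hypothetical 9-s trial; baseline taken from the first second of the epoch.
trial = np.random.randn(9 * fs)
curve = erd_ers_percent(trial, baseline=slice(0, fs))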
ERD and ERS are difficult to observe from the raw EEG signal; an EEG signal expresses the combination of activities from many neuronal sources. We will use the averaging technique to represent the modulation of power of the mu and beta rhythms during the MI, MNS + MI, and MNS conditions since it is considered one of the most effective and accurate techniques used to extract events [20, 38].
For each trial (n = 50), ERD and ERS modulations will be computed in the 8–30 Hz frequency band for the [–2;6]s time window. This time window was chosen according to the ERD occurrence during the motor task (i.e., 2 s for RM, MI, and MI+MNS) and the time required for the ERS to return to the baseline [21]. For each trial, an ERD max and an ERS max will be selected in their respective time windows ([0;2]s and [4;6]s). For the MNS task, in accordance with the literature [26, 27], the ERD max and ERS max will be selected in [0.25;0.5]s and [2;4]s after stimulation, respectively. An average over the 50 ERDs max and 50 ERSs max will be performed for the motor tasks (RM, MI, MNS, MI+MNS) and each concentration (0 μg.ml −1, 0.5 μg.ml −1, and 1 μg.ml −1). These two values will be compared with each other for each motor task during the three concentrations with a Student's t test (p value <0.05).
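The extraction of the ERD max and ERS max per trial and a paired comparison could be sketched as follows in Python, assuming the ERD max is taken as the most negative value in [0;2]s and the ERS max as the most positive value in [4;6]s; the data, array shapes, and the exact pair of conditions compared are placeholders.

import numpy as np
from scipy.stats import ttest_rel

# Hypothetical ERD/ERS% curves: 50 trials x time points on a [-2; 6] s axis.
t = np.linspace(-2, 6, 200)
curves_a = np.random.randn(50, t.size)  # e.g., one motor task / concentration
curves_b = np.random.randn(50, t.size)  # e.g., another condition to compare

def erd_ers_max(curves):
    erd_win = (t >= 0) & (t < 2)   # ERD max: most negative value in [0; 2] s
    ers_win = (t >= 4) & (t < 6)   # ERS max: most positive value in [4; 6] s
    return curves[:, erd_win].min(axis=1), curves[:, ers_win].max(axis=1)

erd_a, ers_a = erd_ers_max(curves_a)
erd_b, ers_b = erd_ers_max(curves_b)

# Paired Student's t test on the per-trial ERD max values (alpha = 0.05).
stat, p = ttest_rel(erd_a, erd_b)
print(f"t = {stat:.2f}, p = {p:.3f}")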
The ERD and ERS will also be visualized as event-related spectral perturbations (ERSPs). ERSPs allow one to visualize event-related changes in the average power spectrum relative to a baseline of 1.5 s taken 2 s before the auditory cue for all motor tasks and different propofol concentrations [39]. A surrogate permutation test (p <0.05; 2000 permutations) from the EEGLAB toolbox will be used to assess time-frequency differences in these ERSPs. In addition to this analysis, we will apply a false discovery rate (FDR) correction in order to control the FDR across multiple comparisons. The permutation test consists of repetitively shuffling values between conditions and recomputing the measure of interest using the shuffled data. It will be performed by drawing data samples without replacement and is considered suitable to show the differences between all motor tasks during different concentrations of propofol [40].
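The logic of the permutation test followed by FDR correction can be illustrated by the following hedged Python sketch; the actual analysis will use the EEGLAB routines, so this only mirrors the principle (label shuffling without replacement, then Benjamini-Hochberg FDR over all time-frequency bins) on hypothetical ERSP arrays.

import numpy as np
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)

def permutation_pvalues(a, b, n_perm=2000):
    # a, b: trials x frequencies x time points (ERSP maps for two conditions).
    obs = a.mean(axis=0) - b.mean(axis=0)
    pooled, n_a = np.concatenate([a, b]), len(a)
    count = np.zeros_like(obs)
    for _ in range(n_perm):
        idx = rng.permutation(len(pooled))          # shuffle without replacement
        d = pooled[idx[:n_a]].mean(axis=0) - pooled[idx[n_a:]].mean(axis=0)
        count += np.abs(d) >= np.abs(obs)
    return (count + 1) / (n_perm + 1)

# Hypothetical ERSPs: 50 trials, 30 frequencies, 100 time points per condition.
ersp_a = rng.standard_normal((50, 30, 100))
ersp_b = rng.standard_normal((50, 30, 100))

p = permutation_pvalues(ersp_a, ersp_b, n_perm=200)  # reduced count for speed
reject, p_fdr, _, _ = multipletests(p.ravel(), alpha=0.05, method="fdr_bh")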
Finally, we will compute the performance of four different classification methods in a fourfold cross-validation scheme. The first one uses a linear discriminant analysis (LDA) classifier trained and evaluated using common spatial pattern (CSP) features generated from the first and last four CSP filters [41] (referred to as CSP+LDA). The CSP method is widely used in the field of MI-based BCI, as it provides a feature projection onto a lower dimensional space that minimizes the variance of one class while maximizing the variance of the other. The other three classifiers are Riemannian geometry-based classification methods. Riemannian geometry-based methods work with the covariance matrices of each trial, which lie on the Riemannian manifold of symmetric positive definite matrices. These features therefore have the advantage of being immune to linear transformations [42]. We will apply a paired t test (two-sided) to assess the significance of the difference in accuracy obtained for MI versus Rest and MI + MNS versus Rest with the TS + LR classifier (at p values <0.05).
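A minimal sketch of the CSP+LDA pipeline, assuming band-pass-filtered trials stored as NumPy arrays (trials x channels x samples); this is an illustrative implementation with hypothetical data, not the exact code that will be used in the study, and for simplicity the CSP filters are estimated on all trials rather than within each cross-validation fold.

import numpy as np
from scipy.linalg import eigh
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

def csp_filters(class_a, class_b, n_pairs=4):
    # Generalized eigendecomposition of the two class-average covariances.
    cov = lambda X: np.mean([x @ x.T / np.trace(x @ x.T) for x in X], axis=0)
    _, vecs = eigh(cov(class_a), cov(class_a) + cov(class_b))
    keep = np.r_[:n_pairs, -n_pairs:0]          # first and last four CSP filters
    return vecs[:, keep].T

def csp_features(X, W):
    # Log-variance of the spatially filtered signals, one row per trial.
    return np.array([np.log(np.var(W @ x, axis=1)) for x in X])

# Hypothetical data: 50 trials per class, 32 motor-cortex channels, 9-s epochs at 128 Hz.
rng = np.random.default_rng(0)
X_mi, X_rest = rng.standard_normal((2, 50, 32, 9 * 128))

W = csp_filters(X_mi, X_rest)
features = np.vstack([csp_features(X_mi, W), csp_features(X_rest, W)])
labels = np.r_[np.ones(50), np.zeros(50)]

scores = cross_val_score(LinearDiscriminantAnalysis(), features, labels, cv=4)
print("fourfold CV accuracy:", scores.mean())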
Study duration
For each participant, the duration of participation is 31 days, including a half-day experimental visit. The experimental visit is scheduled to last approximately 3 h. The duration of the inclusion period is 24 months. The total estimated duration of the research, including the time required to analyze the data, is 30 months.
The sponsor has the right to interrupt the research at any time if:
The recruitment of subjects is not appropriate;
There are serious deviations in the protocol which have an impact on the statistical analysis of the data;
A major problem concerning the security and rights of the subjects arises;
The competent authority or the ethics committee so requests.
Data will be collected with the use of the case report form (CRF), which will be prospectively maintained from the time the patient signs the informed consent until the completion of the study. Patient data will be anonymized; each individual will be given a unique study number (first letter of the first name and first letter of the last name, supplemented by a number assigned to the inclusion, in accordance with Reference Methodology MR001). The data monitoring committee is the Direction de la Recherche et de l'Innovation (sponsor) of the University Hospital of Nancy. An independent data monitoring committee is also nominated to assess serious adverse reactions.
The CRF will contain:
Demographic data for each patient;
Adverse events that may occur during the study;
The monitoring data (blood pressure, heart rate, saturation);
EEG data, which will be recorded electronically and stored on a computer remaining in the hospital under the supervision of the investigating anesthetist.
Principal criteria
The main evaluation outcome will be the amplitude of the ERD (event-related desynchronization)/ERS (event-related synchronization) after each motor task, within 2 s of the start signal (beep sound), before and after propofol injection and according to a pre-established increase in doses (0 μg.ml −1, 0.5 μg.ml −1, and 1 μg.ml −1). This amplitude will be calculated using a baseline taken before each task. The amplitudes of the ERD and ERS will be extracted. The ERDs and ERSs will be displayed from the time each motor task is performed until 2 s after the task is completed.
Secondary criteria
The secondary evaluation criteria will include a comparison of the ERD/ERS in the three different sessions (without propofol, concentration at 0.5 μg.ml −1, and concentration at 1 μg.ml −1 at the effect site). Another secondary endpoint will be the detection of ERS after MNS. Finally, the last secondary criterion will be the statistical reliability of the detection of ERD/ERS in the primary endpoint coupled with the secondary endpoints.
MOTANA is an exploratory study aimed at designing an innovative BCI based on EEG motor brain activity that would detect the intention to move of a patient during anesthesia. MOTANA is the first study to analyze the effects of a median nerve stimulation (MNS) in this context.
Getting closer to the anesthetized state
Since the first reflex for a patient during an AAGA is to move, a passive BCI based on the intention of movement is conceivable. Indeed, Blokland et al. have shown the feasibility of such a device [24]. However, the challenge of using such a BCI is that the intention to move from the waking patient is not initiated by a trigger that could be used to guide a classifier. In a previous study, we proposed a new solution based on MNS, which causes specific modulations in the motor cortex and can be altered by an intention of movement. We showed that MNS may provide a foundation for an innovative BCI that would allow the detection of an AAGA [23]. More particularly, we verified that MNS modulates the motor cortex by first generating an ERD during stimulation and then an ERS post-stimulation in volunteer subjects. In addition, we have discovered a new post-stimulation rebound ERS (PSR) which appears 250 ms after the stimulation in the mu and low beta bands. MNS combined with the intention to move, i.e., the MI, had a significant impact on the ERD and ERS generated by the MNS. Indeed, despite the fact that the ERD was unaltered, the PSR was almost abolished and the rebound in the beta band was diminished. These differences allowed highly accurate classification results. With these findings, we showed that a BCI based on MNS is more effective than a BCI based on a MI state versus rest [23]. We will seek to confirm these results during the MOTANA clinical protocol, where the same conditions will be used on volunteer subjects sedated with propofol.
If we find similar results in propofol-sedated subjects, we plan to repeat the experiment on subjects under general anesthesia with neuromuscular blockade in order to study RM intention instead of motor imagery. In a final experiment, we could combine both conditions in paralyzed and anesthetized patients in order to investigate whether the combination changes the results.
Getting closer to the implementation
Our other perspective is to create a new way to classify our data online, with either no calibration needed or a very short one. Indeed, we require an easy-to-implement classification in order to make the hypothetical device as practical to use as possible. This also includes work on the number of electrodes required to obtain good results, since fewer electrodes mean less preparation time before a surgery. One last thing we want to study is the impact of the MNS delivered at various times during a MI task. In our previous study, subjects were stimulated at the same time point throughout the experiment (750 ms after the MI task start), but in a real surgery, the MNS would intervene at different times, and the cerebral activity could be modulated differently.
At the time of initial manuscript submission, recruitment had started (November 2018) but had not been completed. Recruitment began in December 2018 and is expected to be completed by December 2019. The current protocol version is version 2 (September 2018) and was registered on ClinicalTrials.gov on 29 August 2018 (NCT03362775).
The datasets generated and/or analyzed during the current study will be available from the corresponding author on request. Records of all patients will be kept separately in a secure place in the CHRU-Brabois hospital.
AAGA:
Accidental awareness during general anesthesia
ASA:
American Society of Anesthesiologists
BCI:
Brain-computer interface
CRF:
Case report form
CB:
Cédric Baumann
EEG:
Electroencephalography
ERD:
Event-related desynchronization
ERS:
Event-related synchronization
EMG:
Electromyography
MNS:
Median nerve stimulation
NIBP:
Non-invasive blood pressure measurement
PG:
Philippe Guerci
PTSD:
Post-traumatic stress disorder
RM:
Real movement
SR:
Sébastien Rimbert
Weiser T, Haynes A, Molina G, Lipsitz S, Esquivel M, Uribe-Leitz T, Fu R, Azad T, Chao T, Berry W, Gawande A. Size and distribution of the global volume of surgery in 2012. Bull World Health Organ. 2016; 94(3):201–9.
Pandit JJ, Andrade J, Bogod DG, Hitchman JM, Jonker WR, Lucas N, Mackay JH, Nimmo AF, O'Connor K, O'Sullivan EP, Paul RG, Palmer JHMG, Plaat F, Radcliffe JJ, Sury MRJ, Torevell HE, Wang M, Hainsworth J, Cook TM, Royal College of Anaesthetists, Association of Anaesthetists of Great Britain and Ireland. 5th national audit project (NAP5) on accidental awareness during general anaesthesia: summary of main findings and risk factors. Br J Anaesth. 2014; 113:549–59.
Almeida D. Awake and unable to move: what can perioperative practitioners do to avoid accidental awareness under general anesthesia? J Perioper Pract. 2015; 25(12):257–61.
Sebel P, Bowdle T, Ghoneim M, Rampil I, Padilla R, Gan T, Domino K. The incidence of awareness during anesthesia: a multicenter United States study. Anesth Analg. 2004; 99(3):833–9.
Avidan M, Zhang L, Burnside BA, Finkel KJ, Searleman AC, Selvidge JA, Saager L, Turner MS, Rao S, Bottros M, Hantler C, Jacobsohn E, Evers AS. Anesthesia awareness and the bispectral index. N Engl J Med. 2008; 358(11):1097–108. https://doi.org/10.1056/NEJMoa0707361. PMID: 18337600.
Xu L, Wu A-S, Yue Y. The incidence of intra-operative awareness during general anesthesia in China: a multi-center observational study. Acta Anaesthesiol Scand. 2009; 53(7):873–82. https://doi.org/10.1111/j.1399-6576.2009.02016.x. https://onlinelibrary.wiley.com/doi/pdf/10.1111/j.1399-6576.2009.02016.x.
Osterman JE, Hopper J, Heran WJ, Keane TM, van der Kolk BA. Awareness under anesthesia and the development of posttraumatic stress disorder. Gen Hosp Psychiatry. 2001; 23(4):198–204. https://doi.org/10.1016/S0163-8343(01)00142-6.
Lau K, Matta B, Menon D, Absalom A. Attitudes of anaesthetists to awareness and depth of anaesthesia monitoring in the UK. Eur J Anaesthesiol. 2006; 23(11):921–30.
Bischoff P, Rundshagen I. Awareness under general anesthesia. Dtsch Arztebl Int. 2011; 108(1-2):1–7. https://doi.org/10.3238/arztebl.2011.0001.
Mashour G, Avidan M. Intraoperative awareness: controversies and con-controversies. Br J Anaesth. 2015; 115(1):20–26.
Pandit JJ, Russell IF, Wang M. Interpretations of responses using the isolated forearm technique in general anaesthesia: a debate. Br J Anaesth. 2015; 115:32–45. https://doi.org/10.1093/bja/aev106.
Punjasawadwong PAY, Bunchungmongkol N. Bispectral index for improving anaesthetic delivery and postoperative recovery. Cochrane Database Syst Rev. 2014; 6. https://doi.org/10.1002/14651858.CD003843.pub3.
Kent C, Domino K. Depth of anesthesia. Curr Opin Anaesthesiol. 2009; 22(6):782–7.
Mashour GA, Avidan M. Intraoperative awareness: controversies and non-controversies. Br J Anaesth. 2015; 115:20–6.
Schneider G, Mappes A, Neissendorfer T, Schabacker M, Kuppe H, Kochs E. EEG-based indices of anaesthesia: correlation between bispectral index and patient state index? Eur J Anaesthesiol. 2004; 21(1):6–12. https://doi.org/10.1017/S0265021504001024.
Ghoneim MM, Block RI, Haffarnan M, Mathews MJ. Awareness during anesthesia: risk factors, causes and sequelae: a review of reported cases in the literature. Anesth Analg. 2009; 108(2):527–35. https://doi.org/10.1213/ane.0b013e318193c634.
Tasbighou S, Vogels M, Absalom A. Accidental awareness during general anaesthesia — a narrative review. Anaesthesia. 2018; 73(1):112–22.
Wolpaw J, Wolpaw EW, (eds). Brain-computer interfaces: principles and practice. New York: Oxford University Press; 2012.
Pfurtscheller G, Neuper C. Motor imagery and direct brain-computer communication. Proc IEEE. 2001; 89(7):1123–34. https://doi.org/10.1109/5.939829.
Pfurtscheller G. Induced oscillations in the alpha band: functional meaning. Epilepsia. 2003; 44(12):2–8.
Kilavik BE, Zaepffel M, Brovelli A, MacKay WA, Riehle A. The ups and downs of beta oscillations in sensorimotor cortex. Exp Neurol. 2013; 245:15–26. https://doi.org/10.1016/j.expneurol.2012.09.014.
Hashimoto Y, Ushiba J. EEG-based classification of imaginary left and right foot movements using beta rebound. Clin Neurophysiol. 2013; 124(11):2153–60. https://doi.org/10.1016/j.clinph.2013.05.006.
Rimbert S, Riff P, Gayraud N, Bougrain L. Median nerve stimulation based BCI: a new approach to detect intraoperative awareness during general anesthesia. Front Neurosci. 2019; 13:622. https://www.frontiersin.org/article/10.3389/fnins.2019.00622, https://doi.org/10.3389/fnins.2019.00622.
Blokland Y, Farquhar J, Lerou J, Mourisse J, Scheffer GJ, van Geffen G-J, Spyrou L, Bruhn J. Decoding motor responses from the EEG during altered states of consciousness induced by propofol. J Neural Eng. 2016; 13(2):026014.
Schnitzler A, Salenius S, Salmelin R, Jousmaki V, Hari R. Involvement of primary motor cortex in motor imagery: a neuromagnetic study. Neuroimage. 1997; 6(3):201–8. https://doi.org/10.1006/nimg.1997.0286.
Salenius S, Schnitzler A, Salmelin R, Jousmäki V, Hari R. Modulation of human cortical rolandic rhythms during natural sensorimotor tasks. NeuroImage. 1997; 5(3):221–8. https://doi.org/10.1006/nimg.1997.0261.
Rimbert S, Philippe G, Nathalie G, Claude M, Laurent B. Innovative Brain-Computer Interface based on motor cortex activity to detect accidental awareness during general anesthesia. In: IEEE Internation Conference on Systems, Man and Cybernetics, IEEE SMC 2019 - IEEE International Conference on Systems, Man, and Cybernetics. Bari: 2019. https://hal.inria.fr/hal-02166934.
Kumbhare D, Robinson L, Buschbacher R. Median nerve to the abductor pollicis brevis. In: Kumbhare D, Robinson L, Buschbacher R, editors. Buschbacher's manual of nerve conduction studies, 3rd edition. New York: Demos Medical Publishing; 2016. p. 10.
World Medical Association. World Medical Association Declaration of Helsinki: ethical principles for medical research involving human subjects. J Postgrad Med. 2002; 48(3):206–8. KIE: KIE Bib: human experimentation.
Renard Y, Lotte F, Gibert G, Congedo M, Maby E, Delannoy V, Bertrand O, Lécuyer A. OpenViBE: an open-source software platform to design, test and use brain-computer interfaces in real and virtual environments. Presence. 2010; 10:35–53. https://doi.org/10.1162/pres.19.1.35.
Filgueiras A, Quintas Conde E, Hall C. The neural basis of kinesthetic and visual imagery in sports: an ALE meta-analysis. Brain Imaging Behav. 2017. https://doi.org/10.1007/s11682-017-9813-9.
Guillot A, Collet C, Nguyen VA, Malouin F, Richards C, Doyon J. Brain activity during visual versus kinesthetic imagery: an FMRI study. Hum Brain Mapp. 2009; 30(7):2157–72. https://doi.org/10.1002/hbm.20658.
Delorme A, Makeig S. EEGLAB: an open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J Neurosci Methods. 2004; 134(1):9–21.
Dien J. Issues in the application of the average reference: review, critiques, and recommendations. Behav Res Methods. 1998; 30:34.
Lei X, Liao K. Understanding the influences of EEG reference: a large-scale brain network perspective. Front Neurosci. 2017; 11:205.
Perrin F, Pernier J, Betrand O, Echallier J. Spherical splines for scalp potential and current density mapping. Electroencephalogr Clin Neurophysiol. 1989; 72(2):184–7.
Pfurtscheller G, Lopes da Silva FH. Event-related EEG/MEG synchronization and desynchronization: basic principles. Clin Neurophysiol. 1999; 110(11):1842–57.
Quiroga RQ, Garcia H. Single-trial event-related potentials with wavelet denoising. Clin Neurophysiol. 2003; 114(2):376–90.
Brunner C, Delorme A, Makeig S. EEGLAB — an open source MATLAB toolbox for electrophysiological research. Biomed Tech. 2013; 58:9–21. https://doi.org/10.1515/bmt-2013-4182.
Manly B. The generation of random permutations. In: Randomization, bootstrap and Monte Carlo methods in biology. Boca Raton, FL: Chapman & Hall/CRC; 2006.
Blankertz B, Tomioka R, Lemm S, Kawanabe M, Müller KR. Optimizing spatial filters for robust EEG single-trial analysis. IEEE Signal Proc Mag. 2008; 25:41–56. https://doi.org/10.1109/MSP.2008.4408441.
Barachant A, Bonnet S, Congedo M, Jutten C. Riemannian geometry applied to BCI classification. In: International Conference on Latent Variable Analysis and Signal Separation. Saint-Malo: Springer: 2010. p. 629–36.
The sponsor was CHRU of Nancy (Direction de la Recherche et de l'Innovation). The authors thank Marjorie Starck and Ludivine Odoul for their help during the writing of the MOTANA clinical protocol. The authors of this paper acknowledge their gratitude to the participants in this research project.
No funding was received for conducting this trial. This study will be realized as part of Sébastien Rimbert's thesis (i.e., funded by the Inria, LORIA, Université de Lorraine, urban community of Nancy and Region Grand-Est). The EEG acquisition equipment was donated by the Neurosys team (LORIA laboratory-CNRS, Inria, Université de Lorraine). The propofol and equipment for the intervention were donated by the university hospital of Nancy-Brabois.
Université de Lorraine, Inria, LORIA, Neurosys team, 615 rue du Jardin Botanique, Vandoeuvre-lès-Nancy, France
Sébastien Rimbert & Laurent Bougrain
CHU Brugmann, Université Libre de Bruxelles, Place A.Van Gehuchten 4, Bruxelles, 1020, Belgium
Denis Schmartz
Department of Anesthesiology and Critical Care Medicine, University Hospital of Nancy, 9 Avenue de la Forêt de Haye, Vandoeuvre-lès-Nancy, 54500, France
Claude Meistelman & Philippe Guerci
INSERM, U1116, Université de Lorraine, 615 rue du Jardin Botanique, Vandoeuvre-lès-Nancy, France
CHRU Nancy, plateforme d'aide à la recherche clinique, UMDS, Vandoeuvre-lès-Nancy, 54500, France
Laurent Bougrain
Claude Meistelman
SR conceived the idea and rationale for this study. PG is the principal investigator of this study. SR, PG, DS, CB, LB, and CM contributed to the design and protocol of this study. SR and PG coordinate the enrollment of participants. SR and PG are responsible for the collection and analysis of the data. SR, PG, DS, CB, LB, and CM were responsible for drafting and critically revising the manuscript. All authors have read and approved the final version of this manuscript.
Correspondence to Sébastien Rimbert.
The study will be conducted in accordance with the principles of the Declaration of Helsinki and the Medical Research Involving Human Subjects Act [29]. This study has been approved by a national ethical committee (Comité de Protection des Personnes Ile de France 1) under number CPPIDF1-2018-ND16. The experiment has also been approved by the Agence Nationale de Sécurité du Médicament (N° EUDRACT 2017-004198-15). Finally, the study protocol was registered on ClinicalTrials.gov (NCT03362775). All participants will provide written informed consent before participating in the trial or extension study.
Consent for publication will be obtained from each participant prior to participation.
SPIRIT 2013 checklist: recommended items to address in a clinical trial protocol and related documents. (DOCX 144 kb)
Rimbert, S., Schmartz, D., Bougrain, L. et al. MOTANA: study protocol to investigate motor cerebral activity during a propofol sedation. Trials 20, 534 (2019). https://doi.org/10.1186/s13063-019-3596-9
Intraoperative awareness
Electroencephalography
Process-oriented analysis of aircraft soot-cirrus interactions constrains the climate impact of aviation
Bernd Kärcher, Fabian Mahrt & Claudia Marcolli
Communications Earth & Environment volume 2, Article number: 113 (2021) Cite this article
Climate-change mitigation
Fully accounting for the climate impact of aviation requires a process-level understanding of the impact of aircraft soot particle emissions on the formation of ice clouds. Assessing this impact with the help of global climate models remains elusive and direct observations are lacking. Here we use a high-resolution cirrus column model to investigate how aircraft-emitted soot particles, released after ice crystals sublimate at the end of the lifetime of contrails and contrail cirrus, perturb the formation of cirrus. By allying cloud simulations with a measurement-based description of soot-induced ice formation, we find that only a small fraction (<1%) of the soot particles succeeds in forming cloud ice alongside homogeneous freezing of liquid aerosol droplets. Thus, soot-perturbed and homogeneously-formed cirrus fundamentally do not differ in optical depth. Our results imply that climate model estimates of global radiative forcing from interactions between aircraft soot and large-scale cirrus may be overestimates. The improved scientific understanding reported here provides a process-based underpinning for improved climate model parametrizations and targeted field observations.
Aviation contributes to anthropogenic radiative forcing (RF) by changing cirrus cloudiness, thereby affecting Earth's energy budget and climate1. Aircraft-induced clouds (AIC) that originate from emitted soot particles include young, line-shaped contrails and the older, irregular-shaped contrail cirrus evolving from them2. AIC yield the largest aviation-induced RF, and effective RF, followed by aviation carbon dioxide emissions2,3. Significant progress has been made in quantifying AIC RF from process-based representations of these man-made ice clouds in global climate models, with a best estimate of 111 mW m−2 for air traffic in the year 2018 (ref. 3). This estimate does not include the impact of contrails forming within already-existing cirrus4 and is expected to increase strongly in the next decades5, assuming that aviation activity reverts to projected growth rates after the COVID-19 pandemic6. By contrast, the role of aircraft soot particles in the formation of new cirrus proves difficult to establish with confidence. Estimates of RF from aircraft soot–cirrus interactions are highly varied with large positive and negative values of magnitudes that can exceed the total aviation RF best estimate of 149.1 mW m−2 (ref. 3), because a detailed understanding of how aircraft soot particles nucleate ice crystals is lacking. If one of these large RF values were to be substantiated, the perception of the role of aviation in the climate system would be substantially altered.
An extensive field campaign did not detect a unique aviation signature in cirrus formation, but could not rule out that cirrus formed on nuclei originating from aircraft exhaust7. Aircraft measurements ascribed cirrus ice crystal number concentrations (ICNCs) and depolarisation ratios to aircraft soot impacts, but only provided correlational evidence8,9. A cloud model study suggested that aircraft-emitted soot particles present in number concentrations below 100 L−1 modify cirrus ICNCs and coverage under the assumption that they are efficient ice-nucleating particles (INPs)10. Global climate model studies have led to a wide range of RF estimates for aircraft soot–cirrus interactions, currently from approximately −330 to 287 mW m−2 normalised to 2018 air traffic3. This range persists despite some older estimates being superseded3. Some models assume that aircraft-emitted soot particles frozen in contrails act as good INPs in large-scale cirrus formation, while others, assuming that only a small fraction of all aircraft soot particles is capable of nucleating ice, led to much smaller RF values of 11–13 mW m−2 (ref. 3). Consequently, best RF estimates remain undetermined. This represents a severe gap in the knowledge of how aviation contributes to climate change.
While soot particles are poor INPs at mixed-phase cloud temperatures (>238 K)11,12, some studies showed enhanced ice nucleation activity at cirrus temperatures (<233 K)13,14. This behaviour has been explained by pore condensation and freezing, PCF15,16. Contrail processing—the formation of contrail ice crystals on aircraft-emitted soot particles followed by full ice crystal sublimation and release of the soot particles to the atmosphere—results in compacted soot aggregates. A recent laboratory study demonstrated that the ice activity of soot particles is significantly enhanced upon contrail processing17. This enhancement was attributed to increased porosity of soot aggregates after compaction, facilitating PCF.
Contrail processing occurs in long-lived and short-lived AIC. Long-lived AIC evolve over many hours in ice-supersaturated regions2. The formation of long-lived AIC is constrained by the occurrence frequency of upper tropospheric ice supersaturation with a multi-year average of about 20–30% over the North Atlantic region18. Short-lived contrails dissipate a few minutes past formation and are mainly found in low latitudes19. They appear more frequently than long-lived AIC as they form in ice-subsaturated air, and are usually not considered in global climate models because their contribution to AIC RF is negligible2.
The wide range of values and the resulting low confidence attributed to RF estimates from aircraft soot–cirrus interactions3 underscores the importance to critically assess how aircraft soot emissions impact cirrus formation. Here we employ a state-of-the-art, high-resolution cirrus column model20 and combine it with a novel, PCF-based parametrization describing the ice nucleation activity of contrail-processed soot particles21. Clarifying the ice nucleation activity of aircraft-emitted soot particles resolves a long-standing knowledge gap and allows us to unravel the microphysical mechanisms underlying associated cirrus perturbations on the process level. We intentionally overestimate factors controlling aircraft soot–cirrus interactions to establish an upper-limit impact that may serve to constrain the magnitude of associated RF estimates.
Figure 1 illustrates our conceptual framework. A contrail dissipates at plume age td, defined as the time elapsed after formation (lifetime), releasing the processed soot particles upon which it originally formed. Number concentrations of soot particles and ice crystals decrease with increasing plume dilution caused by entrainment of ambient air. We distinguish between cases where the soot particles participate in the formation of either a new contrail at time tp > td or contrail cirrus at a later time tb > tp. We define cirrus perturbations as plume scenarios (Plume), if the exhaust plume containing these particles is relatively close to the source aircraft and therefore still line-shaped, and if the number concentration of ice-active soot particles exceeds that of INP in the background upper troposphere, which amounts to some 10 L−1 (ref. 22). Otherwise, we define cases as background scenarios (Background). Setting tp = 0.5 h and tb = 5 h, we derive soot particle number concentrations of 5448 and 458 L−1 (Supplementary Table 1), and associated upper-limit estimates of ice-active soot particle number concentrations of 54 and 5 L−1 for scenarios Plume and Background, respectively (see 'Methods' section and Supplementary Figs. 1–3).
Fig. 1: Storyline of aircraft soot–cirrus interactions.
Contrail ice crystals form in aircraft exhaust plumes closely behind the jet engines at cruise altitudes. Plume dilution decreases soot particle and ice crystal number concentrations over time. The contrail dissipates completely after time td and full sublimation of all ice crystals releases all contrail-processed soot particles. After a timespan (tp − td) or (tb − td), the air is lifted (red arrows), generating ice supersaturation. Ice crystals form via soot-PCF (green arrows and upper inset) in addition to homogeneous freezing of liquid solution droplets (lower inset), modulating ice formation in a new line-shaped cirrus cloud similar to a contrail (scenario Plume) or an irregularly shaped cirrus cloud similar to contrail cirrus (Background). The scenarios consider the number concentrations of ice-active aircraft soot particles that encompass the range of INP values observed in the background upper troposphere. In the unperturbed scenario (Base), new cirrus forms solely by homogeneous freezing. All clouds form in the same meteorological conditions and with the same constant updraught speed enhanced by ubiquitous mesoscale gravity waves (wavy arrows).
In scenario Background, the total soot particle number concentrations approximately quantify the aviation contribution to the large-scale atmospheric particle background (see 'Methods' section). As entrainment rates diminish with plume age, the effect of dilution is stronger in scenario Plume than in Background. We describe the baseline against which the soot perturbation scenarios are compared by a simulation of unperturbed cirrus formation via homogeneous freezing of aqueous solution droplets (scenario Base). Using an updraught speed of 0.15 m s−1 to drive cirrus formation in all scenarios is representative of widespread mesoscale gravity wave activity23 (see 'Methods' section) and allows us to isolate and determine the characteristics of microphysical mechanisms underlying aircraft soot–cirrus interactions.
Although ice activity measurements of size-selected soot particles with mobility diameters <100 nm are not available, the theoretical underpinning of our soot-PCF parametrization builds strong confidence in applying it to such small particle sizes21. We find that only a fraction of contrail-processed soot particles is ice-active via soot-PCF depending on ice supersaturation and soot particle size on the premise that real aircraft soot particles have properties that are similar to the laboratory soot surrogates. Analysis of pore structures within soot aggregates reveals that high ice supersaturation near the homogeneous aerosol freezing limit (s ≈ 0.5) is required for significant ice activity to occur (see 'Methods' section). Microphysical and optical properties of unperturbed and soot-perturbed cirrus differ to the extent soot-derived ice crystals affect the evolution of ice supersaturation during cirrus formation.
We analyse the temporal evolution of the unperturbed cirrus from scenario Base by focussing on key variables determining cloud radiative effects (Fig. 2). We use the short-wave (solar) cirrus optical depth as a measure of RF24. Ice nucleation occurs in an ICNC burst about 43 min after model initialisation when homogeneous freezing conditions are first met (at peak ice supersaturation). Thereafter, ice crystals continue to form homogeneously at the cloud top due to sustained cooling, but at a reduced rate (Supplementary Fig. 4), causing the ice water path to increase further with time. The maximum ICNC averaged over the cirrus column (≈500 L−1) slightly decreases due to plume dilution, adiabatic cooling and sedimentation. Ice crystal growth due to water vapour (H2O) deposition causes the column-averaged, mean ice crystal diameter to increase rapidly to about 20 μm and then gradually towards 30 μm. Optical depth increases accordingly and reaches a value of 0.3. Cirrus with low optical depth (<0.3) similar to AIC2 are most susceptible to INP perturbations. Column-averaged ice crystal size distributions reveal diameters up to about 20 μm at 45 min and 40 μm at 75 min (Fig. 3). ICNC and mean diameter, as well as ice water content and ice supersaturation, develop significant vertical structure resulting from the interplay between ice crystal nucleation, growth and sedimentation (Supplementary Fig. 4).
Fig. 2: Cirrus cloud microphysical and optical properties.
Shown are a, e, i ice water path, b, f, j column-average total ice crystal number concentration, c, g, k column-average number-weighted mean ice crystal diameter and d, h, l short-wave cirrus optical depth versus time past ice saturation initially reached at 10 km altitude. The top panel shows the homogeneous freezing-only reference scenario Base and corresponding results for the soot-perturbed scenarios Background and Plume are presented in the middle and bottom panel, respectively. The light grey curves repeat the Base results to facilitate comparison.
Fig. 3: Cirrus ice crystal size distributions.
Column-averaged results taken at (dashed) 45 min and (solid) 75 min for a the reference scenario and b, c the cirrus perturbation scenarios. Size distribution refers to number concentrations of ice crystals with diameters Di in the range [Di − ΔDi, Di + ΔDi] divided by the grid size resolution ΔDi/Di = 0.0235.
In scenarios Background and Plume, ice crystals already start to form on the subset of ice-active soot particles about 6 min earlier than in scenario Base (Fig. 2). The ice water paths of the maturing soot-perturbed cirrus exceed that of Base, because the soot-generated ice crystals settle into ice-supersaturated air and take up relatively large amounts of water vapour. The onset of homogeneous freezing is slightly delayed, associated ICNCs decrease by ≈150 L−1 (Background) and ≈300 L−1 (Plume) and ice crystal diameters increase. Adding a small number of INPs reduces the number concentrations of homogeneously produced ice crystals for two reasons: growth of soot-derived ice crystals lowers the ice supersaturation resulting from the updraught owing to initially high deposition coefficients (>0.1) and homogeneous freezing rates are very sensitive to even small changes in supersaturation. In addition, turbulent diffusion lowers homogeneously nucleated ICNCs in scenario Plume by flattening vertical supersaturation gradients20. Soot particles and ice crystals dilute faster and the latter achieve larger sizes in scenario Plume than in Background due to faster entrainment of ice-supersaturated air (Supplementary Table 1). As in scenario Base, optical depth is dominated by homogeneous freezing, decreasing only slightly by 0.08 in Plume and increasing by 0.06 in Background. The perturbed size distributions reveal a small number of ice crystals originating from soot with diameters of about 50 µm (Background) and 60–80 µm (Plume) that are absent in Base (Fig. 3). Liquid aerosol particles freeze within and above the soot layer (Supplementary Figs. 5 and 6), showing that aircraft soot is capable of modifying, but not preventing, homogeneous freezing.
Soot particles form significant amounts of ice crystals only around the time and location of peak ice supersaturation. In an ~250-m-thick layer, the highest soot-derived ICNCs are reached shortly before homogeneous freezing commences, namely 4.5 L−1 (Background) and 22 L−1 (Plume) (Supplementary Fig. 7). These values are lower than the upper-limit estimates of 5 and 54 L−1, respectively, reflecting the effect of plume dilution. Outside the ice supersaturation maximum, soot-derived ICNCs diminish rapidly reducing the peak column-integrated values further to 2.5 L−1 (Background) and 13.5 L−1 (Plume) (Fig. 2). Apart from the nonlinearities inherent to ice nucleation and growth processes across vertically inhomogeneous ice supersaturation profiles, dilution and diffusion prevent simple scaling of these results with total soot particle number concentrations.
The generation of few large ice crystals by aircraft soot particles and the reduction of ICNCs relative to Base is significant in both scenarios. However, changes in optical depth, a proxy of RF, are minor (≈20%), as it is dominated by homogeneous freezing in all scenarios. We include an additional sensitivity simulation Background—performed using a much lower (synoptic) updraught speed of 0.01 m s−1—to illustrate the competition between homogeneous and heterogeneous ice formation in weak forcing conditions, where INPs generally exert a greater impact. In this case, soot suppresses the initial ICNC burst, but homogeneous freezing still occurs at the cloud top and the resulting cirrus develops only low optical depth (<0.06) (Supplementary Fig. 8). As the vertical wind field is affected by rapid (few minutes) variabilities of ever-present mesoscale gravity wave activity superimposed onto synoptic motions, mean updraught speeds below 0.15 m s−1 are rarely observed in the upper troposphere and lower stratosphere23. Mean updraught speeds above 0.15 m s−1 strongly enhance the homogeneous freezing contribution, leading to optically thicker cirrus clouds and reducing the impact of ice-active soot particles. The use of a single updraught in simulations of cirrus ice formation does not capture the full variability in ICNCs. However, adding wave-induced temporal variability in vertical wind speeds leads to a massive broadening of ICNC frequency distributions due to multiple homogeneous freezing events25. Because this broadening would make it difficult to separate the effect of aircraft soot from wave-induced variability in ICNC, we did not represent such temporal variations in our simulations. Using a higher mean updraught speed or including updraught speed variability would diminish the impact of aircraft soot.
Our approach overestimates the soot impact, as we prescribe: a soot particle population with a high emission index, large modal size and size spread; a soot particle layer encompassing the peak supersaturation region; and a contact angle at the soot-liquid water interface representing bare (uncoated) soot particles with relatively high PCF activity (effects of atmospheric soot ageing processes relevant for aviation emissions beyond tb do not significantly increase their PCF activity). In addition, we assume the absence of coagulation and scavenging processes diminishing number concentrations of fresh soot particles over time26; efficient INPs from sources other than aviation competing with soot particles and overriding the soot impact; and, as detailed above, high mean updraught speeds or wave-induced temporal variability.
Relaxing any of these assumptions would lead to predictions with smaller or negligible soot-induced cirrus perturbations in our simulations. To illustrate this point, we performed sensitivity studies (see 'Methods' section and Supplementary Fig. 2), where either a lower-limit soot particle number-size distribution (PSD) or an increased contact angle was used. In both cases, ice-active soot particle number concentrations decrease more than tenfold, rendering ice activity practically negligible. Ozone oxidation can decrease contact angles on bare soot particle surfaces. However, gas-phase oxidation becomes increasingly hampered by condensation and coagulation processes that may lead to thick coatings as soot particles age. In the case of aircraft soot, contact angles decrease only slightly and therefore ice activity is not changed significantly (see 'Methods' section). More importantly, we expect aircraft soot particles to coexist with more efficient INPs, especially in the Northern Hemisphere. These INPs include mineral dust particles, which become ice-active already at low ice supersaturation27, where ice activity of aircraft soot particles as predicted by soot-PCF is basically absent (Supplementary Figs. 1 and 2). Aerosol particles emitted from ground sources were observed to be a common feature at cruise altitudes over the Central and Western USA7. Another field study revealed median concentrations of about 35,000 L−1 of refractory particles over the Eastern North Atlantic28, exceeding the total aircraft soot concentrations (Supplementary Table 1) in scenarios Plume (Background) by a factor of 6 (76). Together with the low ice activity of aircraft soot, this strongly suggests that aircraft-emitted soot particles cannot compete in cirrus formation with insoluble aerosol particles from other sources.
Our results challenge the prevailing notion of a large RF from aircraft soot–cirrus interactions. In a global climate model study addressing RF due to soot-perturbed, large-scale cirrus, soot particles released from long-lived AIC were assumed to act as good INPs already at moderate ice supersaturation, based solely on contrail formation and persistence criteria, unless they have collected multilayer sulphate coatings29. Positive RF values were obtained in sensitivity tests assuming low concentrations of ambient sulphate and dust aerosol particles. RF values were found to be negative when processed soot particles were added in areas where homogeneous freezing dominates cirrus formation. More recent studies based on the same global model30,31 suggest lower RF estimates, brought about by accounting for effects of waves on updraught speeds, changes in background aerosols and modifications of the underlying ice nucleation scheme, superseding the largest RF estimates3. Other global model studies32,33 calculated statistically insignificant RF when ascribing a small ice nucleation efficiency (0.1%) to all aircraft-emitted soot particles.
We identify a number of issues causing large negative RF values due to interactions between aircraft soot particles and large-scale cirrus to be overestimated in the global model studies. Firstly, the size-dependence of ice activity of soot particles processed in long-lived AIC due to soot-PCF is much stronger than predicted by parametrizations based on, e.g., immersion freezing34 and leads to lower soot-derived ICNCs. Scenario Background—intentionally overestimating the soot impact—yields a number concentration of ice-active soot particles of 5 L−1, significantly below the values of up to 100–300 L−1 of contrail-processed aircraft soot particles in regions with highest flight activities29. Inclusion of processing of soot emissions in short-lived contrails and Plume-type scenarios that both were not considered in the global model study would partially offset this decrease. Secondly, the parametrization employed in ref. 29 to represent cirrus ice formation reduces the homogeneous freezing contribution too strongly, because processed soot particles, either bare or with minimal coatings, were assumed to act as INPs already at moderate ice supersaturation (0.35, see ref. 30); and the effect of vertically inhomogeneous supersaturation on ice activation and nucleated ICNC as well as homogeneous aerosol freezing at cloud tops was not represented.
Ideally, targeted field measurements would be used to confirm the limited effect of aircraft soot on cirrus. In planning such observations, it is important to analyse the large-scale meteorological situation leading to cirrus formation after AIC dissipation. Emphasis should be placed on the time spans (t − td), since they indicate whether Plume-type (t → tp) or Background-type (t → tb) scenarios are realised. AIC lifetimes (td) and duration and extent of ice-supersaturated areas (affecting both tp and tb) in which AIC evolve show considerable latitudinal and seasonal variations18,35. To enable meaningful comparisons with results from detailed cloud models, airborne measurements should characterise soot PSDs and mixing state before and after contrail processing, and provide number concentration and chemical composition of INPs along with ice supersaturation, cloud ice microphysical properties and vertical air motion variability during cirrus formation. However, it will be challenging to identify aircraft soot-induced changes due to ubiquitous gravity wave activity and because INP other than aircraft soot particles might produce similar or larger perturbations.
Our study is based on an improved physical understanding that grew out of a process-based evaluation of laboratory measurements and by itself presents a major step forward in quantifying aircraft soot–cirrus interactions, although atmospheric conditions and aerosol particle properties are more variable than representable in cloud models. The simulations of a prototype cirrus cloud field experiment will guide the development of global model parametrizations of soot effects as well as future observational studies, enabling a more robust quantification of the associated RF. The results show that cirrus perturbations in optical depth, hence RF, from aircraft soot are minor due mainly to the small size of the soot particles, which severely limits their ice activity. It is likely that the magnitude of RF from aircraft soot–cirrus interactions is much smaller than AIC RF and may even be insignificant because our process-oriented analysis of soot–cirrus interactions using representative upper tropospheric conditions does not reveal a significant impact on cirrus optical depth despite changes in cloud structure. In view of the great difficulty in acquiring conclusive results from airborne measurements, our study provides a major constraint for improved simulations of aircraft soot effects on cirrus in the short term: at most one out of hundred emitted soot particles is ice-active after processing in contrails or contrail cirrus.
Cirrus cloud model
We employ a numerical model to predict the meteorological and microphysical evolution of an ice cloud column for a given vertical wind field20. Because processes controlling aerosol–cirrus interactions unfold on vertical scales of tens of metres and smaller, we use a one-dimensional model framework with a grid resolution of 1 m in the computational domain (altitude 9.5–12 km). The model simulates vertical advection of potential temperature, H2O and size-resolved aerosol particles across a fixed altitude grid. The non-equilibrium water content in aqueous solution droplets is predicted based on size- and composition-dependent hygroscopic and condensational growth rates. In addition, the model treats homogeneous freezing of these droplets and depositional growth of spherical ice crystals by deposition of H2O, and tracks ice crystals (advection and sedimentation). Aircraft soot PSDs are included based on in situ measurements as detailed below. All types of aerosol particles are transported separately for each size category and form ice crystals when the specific particle size and ice supersaturation conditions are met. To estimate short-wave cirrus optical depth, Mie scattering cross-sections are evaluated at a wavelength of 0.55 μm for non-absorbing ice crystals with a refractive index of 1.311. Turbulent diffusion of H2O and heat as well as entrainment of cloud-free air into the cloud column is additionally considered for AIC simulations. Vertical diffusivity is enhanced in young contrails due to decaying aircraft wake turbulence, approaching small ambient values over time. The entrained environmental air is simulated based on the same dynamical evolution as the cloud column, but without ice crystal formation.
Homogeneously nucleated ICNCs in cirrus depend on vertical wind speeds, which are strongly influenced by mesoscale gravity waves. In our simulations, the air is lifted with a constant updraught speed of 0.15 m s−1, exceeding synoptic values and driving cirrus formation far away from strong wave sources such as steep mountains or deep convection. The imposed updraught corresponds to a mean wave-driven enhancement representing the low end of measured values23. Molecular deposition coefficients control the efficiency of ice crystal growth from the vapour. Growth by uptake of H2O is governed by deposition coefficients depending on ice supersaturation, with additional dependencies on ice crystal size, temperature and pressure36. Deposition coefficients reduce to values <0.01 as growing ice crystals quench the ice supersaturation; the resulting slowdown of deposition growth feeds back on ice supersaturation and impacts the vertical distribution of cloud ice. More than 100,000 (10,000) simulation ice particles37, derived from the liquid (soot) aerosol and each containing a number of real ice crystals, are tracked through their lifecycles in each simulation.
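As a rough plausibility check on the optical-depth values reported above, the short-wave optical depth of an idealised ice layer can be estimated in the geometric-optics limit (extinction efficiency ≈ 2 for crystals much larger than the 0.55 μm wavelength). The sketch below is not the model's size-resolved Mie calculation, and the assumed layer depth is illustrative.

```python
import math

def optical_depth(icnc_per_litre, mean_diameter_um, layer_depth_m, q_ext=2.0):
    """Approximate short-wave optical depth of a homogeneous layer of spherical
    ice crystals, using the geometric-optics extinction efficiency q_ext ~ 2."""
    n = icnc_per_litre * 1.0e3               # crystals per m^3
    r = 0.5 * mean_diameter_um * 1.0e-6      # crystal radius in m
    sigma_ext = q_ext * math.pi * r ** 2     # extinction cross-section per crystal
    return n * sigma_ext * layer_depth_m

# Column-averaged values similar to the mature unperturbed cirrus (~500 L^-1,
# ~30 um mean diameter); the 400-m cloud depth is an assumption of this sketch.
print(round(optical_depth(500.0, 30.0, 400.0), 2))   # ~0.28, close to the reported ~0.3
```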
Model initialisation
Initial temperature (230 K at the domain base with a constant lapse rate of −7 K km−1) and relative humidity profiles (Gaussian moist layer with a standard deviation of 350 m peaking at ice saturation at 10 km) are prescribed guided by airborne dropsonde and lidar measurements across flight levels (10–11 km) over the North Atlantic38, representing a typical midlatitude situation with regard to air traffic volume and contrail occurrence. Fully soluble aerosol particles with a hygroscopicity parameter of 0.3, typical for continental accumulation mode aerosols39, are distributed log-normally with a total number concentration of 500,000 L−1 (500 cm−3), modal dry radius of 20 nm and geometric standard deviation of 1.5. Midlatitude upper tropospheric aqueous solution droplet concentrations typically lie in the range 200–2000 cm−3, encompassing large-scale areas where accumulation mode aerosol particles age by coagulation and localised regions where new particles form via gas-to-particle conversion40. Our simulations do not capture variability in ICNCs induced by variations in soluble aerosol properties. However, the susceptibility of homogeneously nucleated ICNCs to soluble aerosol particle properties is weak for updraught speeds and temperatures relevant for the conditions in our simulations41,42,43. The choice of soluble aerosol parameters within reasonable bounds is not crucial for our upper-limit estimates of soot-induced cirrus perturbations. For the given updraught speed, homogeneous freezing of particles across this size distribution occurs in a narrow range around an ice supersaturation of 0.5 at temperatures near 220 K. PSDs are evenly spread in a 300 (400) m deep layer centred at 10 km altitude around the peak supersaturation in scenario Plume (Background). The layer depths are estimated using upper tropospheric vertical diffusivities. The total number emission index, EI = 10^15 (kg-fuel)−1, lies in the upper range of most measurements2 and model predictions covering a wide range of aircraft and emission conditions44,45.
Soot particle properties
While most in-flight measurements show that number-size distributions of freshly emitted soot particles in dry exhaust plumes (no contrail processing) are monomodal46, some show a second, large particle mode in both, near-field contrails47 and dry plumes48,49. The origin of the large mode is unclear. In young plumes (age ≈ 1 s), coagulation rates among soot particles diminish rapidly due to plume dilution, preventing the formation of a large mode by collisions26. This suggests that bimodality in soot PSDs does not result from compaction after contrail processing or from coagulation, but can already be a feature of fresh (unprocessed) jet engine emissions. More research is needed to resolve this issue.
PSDs in fresh aircraft exhaust plumes may be approximated by log-normal functions. We employ upper-limit values for the modal dry mobility diameter, Dm = 35.7 nm, and geometric standard deviation, σ = 1.8, of a large number of PSDs inferred from measurements behind a commercial aircraft equipped with turbofan jet engines burning conventional jet fuel at cruising thrust conditions46. In sensitivity studies, we employ a PSD using measured lower-limit values from the same study, Dm = 22.8 nm and σ = 1.65. We note that a reduction in Dm, or EI, replicates the effect of burning pure biofuel or blends of kerosene and biofuel46,50. The measured upper-limit parameters are close to average values predicted by a global aircraft soot emission inventory in cruise conditions44, while the lower-limit values are only rarely found in this inventory. Bimodal PSDs48 are also used in sensitivity studies.
We estimate total soot particle number concentrations to initialise the simulation scenarios Background and Plume at the respective times, tb and tp, based on a single plume dispersion model51. Values taken at a jet engine's nozzle exit plane scale with the soot emission index, c · EI, with c = 2.43 × 10−6 (kg-fuel) L−1. To account for plume dilution after a typical jet mixing timescale, τ = 0.01 s, the exit plane concentration diminishes over time, t, by the dilution factor d = (τ/t)β, which derives from the entrainment rate, β/t, and together with β = 1.075 represents average conditions. We estimate total particle number concentrations, ntot = c · EI · d(t), to be on the order of 1000 L−1 for plume ages of several hours. Values taken at t = tb and tp and those of other model parameters are summarised in Supplementary Table 1.
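A minimal numerical sketch of this dilution estimate, using only the parameter values quoted above, reproduces the scenario concentrations listed in Supplementary Table 1 to within rounding:

```python
EI = 1.0e15      # soot number emission index, (kg-fuel)^-1
c = 2.43e-6      # nozzle-exit scaling constant, (kg-fuel) L^-1
tau = 0.01       # jet mixing timescale, s
beta = 1.075     # entrainment exponent

def n_tot(t_seconds):
    """Total soot particle number concentration (L^-1) at plume age t."""
    return c * EI * (tau / t_seconds) ** beta

print(round(n_tot(0.5 * 3600)))   # scenario Plume (t_p = 0.5 h): ~5400 L^-1 (quoted: 5448)
print(round(n_tot(5.0 * 3600)))   # scenario Background (t_b = 5 h): ~460 L^-1 (quoted: 458)
```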
These estimates do not account for multiple overlapping plumes enhancing number concentrations. The likelihood of plumes overlapping increases with time as plume widths increase due to wind shear; therefore, ongoing dilution reduces ntot-values in individual plumes. This particularly applies to potentially wide plumes in scenario Background. In the atmosphere, both effects can offset each other, so that ntot = 458 L−1 in scenario Background taken at tb = 5 h approximately represents large-scale number concentrations of aircraft soot particles resulting from mixing of plumes older than 5 h. The impact of dilution on ice microphysics in this scenario is small, as the entrainment-mixing timescale of about 5 h (inferred from Supplementary Table 1) is much longer than the timescales of processes governing the cirrus ice formation event.
Ice activity of contrail-processed soot particles
We assume that ice forms on bare aircraft soot particles via PCF, with homogeneous freezing of pure, supercooled liquid pore water in soot aggregates15,16. Because temperatures are low at cirrus levels and pore water is under increasingly high tension with decreasing ice supersaturation, pore water freezes almost instantly once the pore volume is large enough to host a critical ice germ52. The resulting nanoscopic pore ice can then grow into a macroscopic ice crystal at sufficient ice supersaturation to overcome the Kelvin energy barrier. The main factors controlling the ability of a pore to take up liquid water are its diameter and opening angle, and the contact angle between the pore wall material and water. The pore structure of soot aggregates is complex and comprises a range of different pore types53. We have shown that three-membered and four-membered ring pores, consisting of three and four primary particles in contact with each other, are frequent features in soot aggregates and explain the ability of soot particles to form ice via soot-PCF21. In the same study, we have demonstrated that the ice supersaturation required for soot-PCF to occur in such ring pores is linked to a small set of physical properties of the soot particles that can be readily quantified experimentally: the size of and overlap between primary particles, aggregate size and fractal dimension and the contact angle at the soot/liquid water interface. The fractal dimension is a measure for the number of contacts between primary soot particles and increases with compaction.
We apply the soot-PCF framework, developed based on ice nucleation measurements of well-characterised soot particles using a continuous flow diffusion chamber, to particles with properties characteristic for aircraft soot. The probability of a soot particle to contain a ring pore with N members depends on the number of spherical primary particles making up a soot aggregate, Np, which in turn depends on the primary particle diameter, Dp, the gyration diameter, Dg, and the aggregate fractal dimension, δ:
$${N}_{\mathrm{p}}={k}_{0}{\left(\frac{{D}_{\mathrm{g}}}{{D}_{\mathrm{p}}}\right)}^{\delta },{D}_{\mathrm{g}}=D/1.29$$
where D is the dry mobility diameter and k0 is a scaling factor. For diffusion-limited cluster-cluster aggregation, k0 = 1.3 and δ = 1.78 (ref. 54). Sizes of primary aircraft soot particles depend on the engine thrust level and the type of fuel burnt. We choose Dp = 20 nm to calculate Np as a function of D, which reflects the predominant primary size of soot particles emitted at cruising thrust50. The fraction of soot aggregates with a given mobility diameter D that activates at ice supersaturation s, the s-cumulative ice-active fraction (AF), depends on Np, on the ice-active site probability function, PN, denoting the probability of a primary particle to be part of a ring pore inducing PCF at s, and on the number of neighbouring particles, N, of each primary particle:
$${{AF}}=1-{(1-{P}_{N})}^{\alpha },\alpha ={({N}_{\mathrm{p}}-N)}^{\delta }.$$
The exponent α quantifies the probability that a soot aggregate contains a primary particle within a ring structure21; subtracting N from Np avoids overcounting ring structures. We choose N = 2, reflecting the minimum number of three primary particles to form a three-membered ring pore. The fact that AF is cumulative in s reflects laboratory measurements in which the ice nucleation activity of an aerosol sample is often probed at successively increasing supersaturation.
Knowledge of PN is sufficient to describe the ice activity of soot aggregates with different sizes from the same emission source, assuming the same contact angle21. To represent contrail-processed aircraft soot, PN is compatible with the primary particle size distribution measured at 65% thrust level when burning standard Jet A-1 fuel50. The PN covers supersaturation-dependent activation probabilities by three-membered and four-membered ring pores with primary soot particle diameters and overlaps ranging from 5 to 40 nm and 0.01 to 0.2, respectively, and is fitted as:
$${\log }_{10}({P}_{N})=\frac{1}{a+b(s+1)}$$
with a = 0.378 and b = −0.462 for a soot-water contact angle, θ, of 60°.
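Taken together, the expressions above fully determine the s-cumulative ice-active fraction of an aggregate with dry mobility diameter D. The following sketch evaluates them with the parameter values quoted for a 60° contact angle; it is a direct transcription of the formulas rather than the study's production code.

```python
K0, DELTA, DP_NM, N_NEIGH = 1.3, 1.78, 20.0, 2   # k0, fractal dimension, Dp (nm), N
A, B = 0.378, -0.462                              # P_N fit parameters for theta = 60 deg

def p_n(s):
    """Probability that a primary particle is part of an ice-active ring pore at supersaturation s."""
    return 10.0 ** (1.0 / (A + B * (s + 1.0)))

def n_primary(d_nm):
    """Number of primary particles N_p in an aggregate of dry mobility diameter d_nm."""
    d_gyration = d_nm / 1.29
    return K0 * (d_gyration / DP_NM) ** DELTA

def active_fraction(d_nm, s):
    """s-cumulative ice-active fraction AF(D, s) for soot-PCF."""
    alpha = max(n_primary(d_nm) - N_NEIGH, 0.0) ** DELTA
    return 1.0 - (1.0 - p_n(s)) ** alpha

for d in (40, 90, 200):
    print(d, round(active_fraction(d, 0.5), 4))
# AF stays below ~0.001 at 40 nm and rises towards order-one values only for the
# largest aggregates, mirroring the size dependence described in the text.
```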
Role of soot particle ageing
Ice-active soot particle fractions exhibit a marked dependence on θ, which can be affected by atmospheric ageing processes. Ageing of soot particles impacts their ability to influence clouds and climate55 and predominantly occurs via (i) condensation of semi-volatile compounds, (ii) coagulation with ambient aerosol particles and (iii) oxidation from the gas phase56. While monolayer sulphuric acid (H2SO4) coatings have little effect for heterogeneous ice nucleation on soot particles57, the ability of PCF is significantly reduced if soot pores are filled with H2SO4 or organic carbon58,59 via processes (i) and (ii). If soot particles obtain thick (multilayer) coatings, the ice nucleation mode shifts from PCF to immersion freezing, where soot is an inefficient INP11,57,59. Therefore, only the ageing process (iii) is capable of enhancing PCF efficiency by decreasing θ.
Data of experimentally determined contact angles relating to aircraft soot are scarce. We first discuss soot-water contact angles associated with unaged soot particles. Persiantseva et al.60 produced soot particles in a gas-turbine combustion chamber setup by burning propane-butane mixtures, mimicking aircraft cruise combustion conditions, and found θ = 63°. The same study reported higher values (70°–80°) when burning the aviation fuels TC1 (fuel sulphur content > 0.11 wt%), T6 (0.05 wt%) and kerosene in an oil lamp setup. Contact angles were found to decrease for TC1 and kerosene soot only after heating and outgassing, processes that are not relevant for the upper troposphere. Using the same technique, Popovicheva et al.61 found θ = 59° ± 4° and 69° ± 4° when burning freshly prepared TC1 and long-time stored (oxidised) TC1 kerosene, respectively. A value of 59° was found for TS1 kerosene fuel using the same approach62. Shonija et al.63 found the contact angle of fresh TC1 soot to be 69°. Significantly higher values (≈ 97°–107°) were reported for unaged kerosene soot64.
We now address how ageing via gas-phase oxidation (process (iii)) might affect θ. Wei et al.64 found θ to decrease by ≈5–10° after exposing kerosene soot to ozone mixing ratios of 40 ppb over a period of ten days, which is at the high end of tropospheric soot particle lifetimes of ≈3–11 d (ref. 56). Following ref. 64 and doubling the ozone mixing ratio to 80 ppbv to account for mixing of upper tropospheric and lower stratospheric air masses, θ might decrease by ≈15–20° over 10 d. Lifetimes in the upper troposphere and tropopause region can be longer, especially for particles emitted at high altitudes as in the case of aircraft emissions. Doubling the lifetime to 20 d with 40 ppbv of ozone would lead to the same decrease. However, ageing of aircraft-emitted soot particles already occurs in evolving exhaust plumes26 via condensation of H2SO4 formed by oxidation of SO2 vapour from the exhaust (process (i)) and coagulation with liquid ambient and plume aerosol particles (process (ii)). Sulphuric acid hardly oxidises soot surfaces65; therefore, exposure to H2SO4 has next to no effect on θ. Moreover, a H2SO4 coating shields the soot surface preventing oxidation from ozone66. Overall, this limits the time available to reduce θ via ozone oxidation to 1–2 days depending on jet fuel sulphur content and ambient aerosol load26; the above rough estimates for reductions in θ reduce to a few degrees. In summary, our default value of 60° for unaged aircraft soot, which is already at the low end of data, would decrease only slightly.
Some fraction of aircraft soot emissions are emitted above the tropopause with residence times that can be longer than the 10–20 days assumed here. Contrails form less frequently in the dry lowermost stratosphere2, so that soot particles emitted there do not frequently get ice-active to begin with. The contribution of stratospherically aged soot particles to ice formation via PCF is likely negligible due to condensation of H2SO4 and coagulation with particles in aircraft exhaust plumes and from the H2SO4/H2O background aerosol. Additional work is needed to better constrain the lifetime and properties of soot particles depending on the location of emissions. This will ultimately allow to fully establish the ice-active contribution of aged soot particles from aviation to cirrus formation.
Since AF depends strongly on soot aggregate size, PSDs play a crucial role. Supplementary Fig. 1 shows that most aircraft soot aggregates are not significantly larger than typical primary particles (20 nm) and only some exceed mobility diameters of 200 nm (panel a). Ice activity is negligible (AF < 0.001) for θ = 60° and soot particles with diameters up to about 40 nm. Full activation (AF → 1) is predicted for D > 200 nm only at high ice supersaturation (panel b). The ice-active PSDs, obtained by multiplying AF with the soot PSD (normalised to unity), show that PCF maximises for D ≈ 90 nm (panel c). Integrating over all sizes, we find that only a minor fraction, f, of emitted soot particles can become ice-active, depending on s (panel d). Values larger than f = 0.01 are not achieved owing to homogeneous ice nucleation and growth.
Ice activity spectra
The dependence of ice-active soot particle fractions on ice supersaturation is given by integrating the ice-active PSDs shown in Supplementary Fig. 1c over all sizes:
$$f(s)=\mathop{\int }\limits_{0}^{\infty }{\mathrm{PSD}}(D){{AF}}(D,s){\mathrm{d}}D$$
using normalised PSDs. Results of sensitivity studies with several observed size distributions using the default contact angle of 60° show that the bimodal PSDs from refs. 47,48 cause much fewer ice-active particle fractions across all supersaturations as compared to our default upper-limit monomodal PSD due to smaller abundances of large particles (Supplementary Fig. 2). Clearly, PSDs are crucial for accurately representing soot–cirrus interactions in models.
Results of sensitivity studies with different contact angles are also shown. Increasing θ by 10° to a value of 70° renders ice activity negligible for both monomodal PSDs. We also show results of an extreme case with θ decreased by 15° to a value of 45°, greatly overestimating the effect of large-scale soot ageing via ozone oxidation by dismissing the possibility of turning externally mixed soot emissions into internally mixed particles already on the plume scale. As a result, the ice activity spectra peak at 5.5 (0.6)% for the upper-limit (lower-limit) PSD. More frequently observed aircraft soot PSDs, characterised by average parameters46, Dm ≈ 29 nm and σ ≈ 1.7, would yield peak ice activity fractions near 1%, similar to our simulation scenarios. We conclude that the effects of atmospheric ageing on the ice activity of aircraft soot particles are most likely negligible and that our conclusions hold true for both unaged and aged particles.
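Under the same assumptions, the size integral defining f(s) can be approximated by a sum over logarithmically spaced bins of a normalised log-normal PSD. The sketch below (self-contained, repeating the ice-active-fraction helper from the previous sketch) yields values of order 1% near s ≈ 0.5 for the upper-limit PSD and much smaller values for the lower-limit PSD, consistent with the spectra discussed above; it is an approximation, not the study's exact numerics.

```python
import numpy as np

K0, DELTA, DP_NM, N_NEIGH, A, B = 1.3, 1.78, 20.0, 2, 0.378, -0.462  # as in the previous sketch

def active_fraction(d_nm, s):
    p_n = 10.0 ** (1.0 / (A + B * (s + 1.0)))
    n_p = K0 * ((d_nm / 1.29) / DP_NM) ** DELTA
    alpha = max(n_p - N_NEIGH, 0.0) ** DELTA
    return 1.0 - (1.0 - p_n) ** alpha

def ice_active_fraction(s, d_mode_nm=35.7, sigma_g=1.8, n_bins=400):
    """f(s): integral over the normalised log-normal PSD times AF(D, s), evaluated in lnD."""
    ln_d = np.linspace(np.log(5.0), np.log(1000.0), n_bins)    # 5 nm to 1 um
    d = np.exp(ln_d)
    ln_sig = np.log(sigma_g)
    psd = np.exp(-0.5 * (np.log(d / d_mode_nm) / ln_sig) ** 2) / (ln_sig * np.sqrt(2.0 * np.pi))
    af = np.array([active_fraction(di, s) for di in d])
    return float(np.sum(psd * af) * (ln_d[1] - ln_d[0]))

print(ice_active_fraction(0.5))               # upper-limit PSD: ~0.01
print(ice_active_fraction(0.5, 22.8, 1.65))   # lower-limit PSD: more than tenfold smaller
```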
Total ice-active soot particle number concentrations
To obtain analytical estimates of soot particle number concentrations that are ice-active at ice supersaturation s at a given time past emission, we multiply f(s) by ntot(t). For Dm = 35.7 nm, σ = 1.8, θ = 60° and s = 0.5, this results in f = 0.01 (Supplementary Figs. 1d and 2), i.e. at most 1% of the emitted soot particles are ice-active. Evaluated at tp = 0.5 h and tb = 5 h, the products f · ntot indicate the magnitude of the largest possible perturbation in scenarios Plume (≈54 L−1) and Background (≈5 L−1), respectively (Supplementary Fig. 3). As the numerical simulations start at ice saturation, additional dilution of soot particles until homogeneous freezing commences is not included in these analytical estimates.
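The analytical estimate is a simple product, as in the minimal sketch below; the ntot values are back-computed from the quoted products and f = 0.01, so they serve only as illustrative stand-ins for the simulated plume and background concentrations.

```python
# Largest possible ice-active soot perturbation: n_active(s, t) = f(s) * n_tot(t).
# f = 0.01 at s = 0.5 (from the activity spectra); the n_tot values below are
# back-computed from the quoted products and are illustrative, not simulation output.
f = 0.01
n_tot = {
    "Plume (t_p = 0.5 h)": 5400.0,      # L^-1, so that f * n_tot ~ 54 L^-1
    "Background (t_b = 5 h)": 500.0,    # L^-1, so that f * n_tot ~ 5 L^-1
}
for scenario, n in n_tot.items():
    print(f"{scenario}: ice-active soot ~ {f * n:.0f} L^-1")
```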
A note on pre-activation
Porous particles such as soot can potentially also form cloud ice crystals via pre-activation. Pre-activation occurs when ice from a previous ice nucleation event is retained in aggregates within nanometre-sized pores at temperatures below the bulk melting point and below ice saturation. The ice phase may then grow upon re-encountering sufficiently high ice supersaturation. Hence, pre-activation describes an ice growth process circumventing the freezing nucleation step. Since homogeneous freezing rates of liquid pore water are high at cirrus temperatures, PCF is constrained either by water condensation into wider pores or ice growth out of narrower pores. Therefore, pre-activation is not relevant when ice growth limits PCF, as is the case for the narrow three-membered and four-membered ring pores that occur when the overlap between primary particles is large. Moreover, when PCF is limited by water condensation, pore ice sublimates already at slight ice subsaturation (s < 0). We, therefore, consider the effect of pre-activation negligible for the pore types that dominate ice formation via PCF in aircraft soot aggregates21.
Implementation of ice-active fractions in the cirrus model
In the numerical simulations, soot particle number concentrations as small as 0.1 (0.01) L−1 are resolved in scenario Plume (Background) and converted to simulation ice particles once they become ice-active. These low threshold number concentrations ensure that ice crystal formation is simulated in soot particles with mobility diameters as low as about 40 nm.
We would overestimate soot-derived ICNCs significantly in our numerical simulations when using the s-cumulative AF parametrization, Eq. (2), because soot particles are depleted from the PSD after previous ice formation events. Instead, we estimate the number fraction of particles that activate during a small increase in ice supersaturation (determined by the model time step that is subject to accuracy constraints) in a given size range (determined by the soot PSD that is discretised with fine size resolution, ΔD/D = 0.135) from a parametrization of differential AF, dAF, consistent with the cumulative AF.
To derive dAF, it is convenient to view the s-cumulative AF as the probability to activate aircraft soot particles of size D, resulting from the statistical outcome of many identically prepared laboratory measurements. Defining \(\Delta s_{j} = s_{j} - s_{j-1}\), with \(s_{0} = 0\) and \({\mathrm{AF}}(s_{0}) = 0\), dropping D from the argument list and noting that \({\mathrm{dAF}}(s_{j})\) describes the fraction of soot particles frozen only within \(\Delta s_{j}\), the total fraction frozen at \(s_{j}\) is the complement of the fraction of particles that did not activate into ice (each interval contributing a survival probability \(1 - {\mathrm{dAF}}(s_{k})\)) in any interval \(\Delta s_{k}\) up to and including \(s_{j}\):
$${{AF}}({s}_{j})=1-[1-{{dAF}}({s}_{1})]\times \ldots \times [1-{{dAF}}({s}_{j-1})]\times [1-{{dAF}}({s}_{j})].$$
With \({\mathrm{dAF}}(s_{1}) = {\mathrm{AF}}(s_{1})\), the values \({\mathrm{dAF}}(s_{j})\) for j > 1 are obtained by recursion. In the simulations, we used constant intervals, \(\Delta s_{j} = 0.01\).
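A minimal sketch of this recursion, assuming an arbitrary placeholder for the cumulative AF curve: dividing out the survival probabilities of the previous intervals yields dAF(s_j), and multiplying the survival factors back together reproduces the cumulative curve.

```python
import numpy as np

def differential_af(af_cum):
    """Convert cumulative AF(s_j) into differential fractions dAF(s_j) such that
    AF(s_j) = 1 - prod_{k<=j} [1 - dAF(s_k)], with dAF(s_1) = AF(s_1)."""
    daf = np.zeros_like(af_cum)
    survival = 1.0                       # product of [1 - dAF] over previous intervals
    for j, af in enumerate(af_cum):
        daf[j] = 1.0 - (1.0 - af) / survival   # fraction activating within this interval
        survival *= 1.0 - daf[j]
    return daf

s = np.arange(0.01, 0.61, 0.01)                        # supersaturation grid, step 0.01
af_cum = 0.01 / (1.0 + np.exp(-(s - 0.4) / 0.05))      # placeholder cumulative AF curve
daf = differential_af(af_cum)
af_reconstructed = 1.0 - np.cumprod(1.0 - daf)         # rebuild the cumulative curve
print("max reconstruction error:", np.max(np.abs(af_reconstructed - af_cum)))
```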
The data that support the findings of this study are available in Zenodo with the identifier https://doi.org/10.5281/zenodo.4709994.
A full description of the cirrus cloud model employed in this study is available at https://doi.org/10.1029/2019JD031847. The source code is not publicly available.
Fahey, D. W. & Schumann, U. in Aviation and the Global Atmosphere. A Special Report of IPCC Working Groups I and III. Intergovernmental Panel on Climate Change (ed. Penner, J. E.) (Cambridge University Press, 1999).
Kärcher, B. Formation and radiative forcing of contrail cirrus. Nat. Commun. 9, https://doi.org/10.1038/s41467-018-04068-0 (2018).
Lee, D. S. et al. The contribution of global aviation to anthropogenic climate forcing for 2000 to 2018. Atmos. Environ. 244, https://doi.org/10.1016/j.atmosenv.2020.117834 (2021).
Tesche, M., Achtert, P., Glantz, P. & Noone, K. J. Aviation effects on already-existing cirrus clouds. Nat. Commun. 7, https://doi.org/10.1038/ncomms12016 (2016).
Bock, L. & Burkhardt, U. Contrail cirrus radiative forcing for future air traffic. Atmos. Chem. Phys. 19, https://doi.org/10.5194/acp-19-8163 (2019).
Le Quéré, C. et al. Temporary reduction in daily global CO2 emissions during the COVID-19 forced confinement. Nat. Clim. Chang. 10, https://doi.org/10.1038/s41558-020-0797-x (2020).
Toon, O. B. & Miake-Lye, R. C. Subsonic Aircraft: Contrail and Cloud Effects Special Study (SUCCESS). Geophys. Res. Lett. 25, https://doi.org/10.1029/98GL00839 (1998).
Ström, J. & Ohlsson, S. In situ measurements of enhanced crystal number densities in cirrus clouds caused by aircraft exhaust. J. Geophys. Res. 103, https://doi.org/10.1029/98JD00807 (1998).
Urbanek, B. et al. High depolarization ratios of naturally occurring cirrus clouds near air traffic regions over Europe. Geophys. Res. Lett. 45, https://doi.org/10.1029/2018GL079345 (2018).
Jensen, E. J. & Toon, O. B. The potential impact of soot particles from aircraft exhaust on cirrus clouds. Geophys. Res. Lett. 24, https://doi.org/10.1029/96GL03235 (1997).
Kanji, Z. A., Welti, A., Corbin, J. C. & Mensah, A. A. Black carbon particles do not matter for immersion mode ice nucleation. Geophys. Res. Lett. 47, https://doi.org/10.1029/2019GL086764 (2020).
Schill, G. P. et al. The contribution of black carbon to global ice nucleating particle concentrations relevant to mixed-phase clouds. Proc. Natl. Acad. Sci. USA 117, https://doi.org/10.1073/pnas.2001674117 (2020).
Mahrt, F. et al. Ice nucleation abilities of soot particles determined with the Horizontal Ice Nucleation Chamber. Atmos. Chem. Phys. 18, https://doi.org/10.5194/acp-18-13363-2018 (2018).
Nichman, L. et al. Laboratory study of the heterogeneous ice nucleation on black-carbon-containing aerosol. Atmos. Chem. Phys. 19, https://doi.org/10.5194/acp-19-12175-2019 (2019).
Marcolli, C. Deposition nucleation viewed as homogeneous or immersion freezing in pores and cavities. Atmos. Chem. Phys. 14, https://doi.org/10.5194/acp-14-2071-2014 (2014).
David, R. O. et al. Pore condensation and freezing is responsible for ice formation below water saturation for porous particles. Proc. Natl. Acad. Sci. USA 116, https://doi.org/10.1073/pnas.1813647116 (2019).
Mahrt, F. et al. The impact of cloud processing on the ice nucleation abilities of soot particles at cirrus temperatures. J. Geophys. Res. 124, https://doi.org/10.1029/2019JD030922 (2020).
Petzold, A. et al. Ice-supersaturated air masses in the northern mid-latitudes from regular in situ observations by passenger aircraft: vertical distribution, seasonality and tropospheric fingerprint. Atmos. Chem. Phys. 20, https://doi.org/10.5194/acp-20-8157-2020 (2020).
Burkhardt, U., Bock, L. & Bier, A. Mitigating the contrail cirrus climate impact by reducing aircraft soot number emissions. npj Clim. Atmos. Sci. 1, https://doi.org/10.1038/s41612-018-0046-4 (2018).
Kärcher, B. Process-based simulation of aerosol-cloud interactions in a one-dimensional cirrus model. J. Geophys. Res. 125, https://doi.org/10.1029/2019JD031847 (2020).
Marcolli, C., Mahrt, F. & Kärcher, B. Soot-PCF: pore condensation and freezing framework for soot aggregates. Atmos. Chem. Phys. Discuss. https://doi.org/10.5194/acp-2020-1134 (2020).
DeMott, P. J. et al. Measurements of the concentration and composition of nuclei for cirrus formation. Proc. Natl. Acad. Sci. USA 100, https://doi.org/10.1073/pnas.253267710 (2003).
Kärcher, B. & Podglajen, A. A stochastic representation of temperature fluctuations induced by mesoscale gravity waves. J. Geophys. Res. 124, https://doi.org/10.1029/2019JG030680 (2019).
Meerkötter, R. et al. Radiative forcing by contrails. Ann. Geophys. 17, https://doi.org/10.1007/s00585-999-1080-7 (1999).
Kärcher, B., Jensen, E. J. & Lohmann, U. The impact of mesoscale gravity waves on homogeneous ice nucleation in cirrus clouds. Geophys. Res. Lett. 46, https://doi.org/10.1029/2019GL082437 (2019).
Kärcher, B., Möhler, O., DeMott, P. J., Pechtl, S. & Yu, F. Insights into the role of soot aerosols in cirrus cloud formation. Atmos. Chem. Phys. 7, https://doi.org/10.5194/acp-7-4203-2007 (2007).
Hoose, C. & Möhler, O. Heterogeneous ice nucleation on atmospheric aerosols: a review of results from laboratory experiments. Atmos. Chem. Phys. 12, https://doi.org/10.5194/acp-12-9817-2012 (2012).
Minikin, A. et al. Aircraft observations of the upper tropospheric fine particle aerosol in the Northern and Southern Hemispheres at midlatitudes. Geophys. Res. Lett. 30, https://doi.org/10.1029/2002GL016458 (2003).
Zhou, C. & Penner, J. E. Aircraft soot indirect effect on large-scale cirrus clouds: is the indirect forcing by aircraft soot positive or negative? J. Geophys. Res. 119, https://doi.org/10.1002/2014JD021914 (2014).
Penner, J., Zhou, C., Garnier, A. & Mitchell, D. L. Anthropogenic aerosol indirect effects in cirrus clouds. J. Geophys. Res. 123, https://doi.org/10.1029/2018JD029204 (2018).
Zhu, J. & Penner, J. E. Radiative forcing of anthropogenic aerosols on cirrus clouds using a hybrid ice nucleation scheme. Atmos. Chem. Phys. 20, https://doi.org/10.5194/acp-20-7801-2020 (2020).
Gettelman, A. & Chen, C. The climate impact of aviation aerosols. Geophys. Res. Lett. 20, https://doi.org/10.1002/grl.5052 (2013).
Pitari, G. et al. Impact of coupled NOx/aerosol aircraft emissions on ozone photochemistry and radiative forcing. Atmosphere 6, https://doi.org/10.3390/atmos6060751 (2015).
Liu, X. & Penner, J. E. Ice nucleation parameterization for global models. Meteorol. Z. 14, https://doi.org/10.1127/0941-2948/2005/0059 (2005).
Irvine, E. A., Hoskins, B. J. & Shine, K. P. A Lagrangian analysis of ice-supersaturated air over the North Atlantic. J. Geophys. Res. 119, https://doi.org/10.1002/2013JD020251 (2013).
Harrington, J. Y., Moyle, A. & Hanson, E. On calculating deposition coefficients and aspect ratio evolution in approximate models of ice crystal vapor growth. J. Atmos. Sci. 76, https://doi.org/10.1175/JAS-D-18-0319.1 (2019).
Sölch, I. & Kärcher, B. A large-eddy model for cirrus clouds with explicit aerosol and ice microphysics and Lagrangian ice particle tracking. Q. J. Roy. Meteorol. Soc. 136, https://doi.org/10.1002/qj.689 (2010).
Schäfler, A. et al. The North Atlantic waveguide and downstream impact experiment. Bull. Amer. Meteor. Soc. 99, https://doi.org/10.1175/BAMS-D-17-0003.1 (2018).
Petters, M. D. & Kreidenweis, S. M. A single parameter representation of hygroscopic growth and cloud condensation nucleus activity. Atmos. Chem. Phys., 8, https://doi.org/10.5194/acp-7-1961-2007 (2008).
Schröder, F., Kärcher, B., Fiebig, M. & Petzold, A. Aerosol states in the free troposphere at northern midlatitudes. J. Geophys. Res. 107, https://doi.org/10.1029/2000JD000194 (2002).
Kärcher, B. & Lohmann, U. A parameterization of cirrus cloud formation: Homogeneous freezing of supercooled aerosols. J. Geophys. Res. 107, https://doi.org/10.1029/2001JD000470 (2002).
Hoyle, C. R., Luo, B. P. & Peter, T. The origin of high ice crystal number densities in cirrus clouds. J. Atmos. Sci. 62, https://doi.org/10.1175/JAS3487.1 (2005).
Kay, J. E. & Wood, R. Timescale analysis of aerosol sensitivity during homogeneous freezing and implications for upper tropospheric water vapor budgets. Geophys. Res. Lett. 35, https://doi.org/10.1029/2007GL032628 (2008).
Zhang, X., Chen., X. & Wang, J. A number-based inventory of size-resolved black carbon particle emissions by global civil aviation. Nat. Commun. 10, https://doi.org/10.1038/s41467-019-08491-9 (2019).
Teoh, R. et al. A methodology to relate black carbon particle number and mass emissions. J. Aerosol Sci. 132, https://doi.org/10.1016/j.jaerosci.2019.03.006 (2019).
Moore, R. H. et al. Biofuel blending reduces particle emissions from aircraft engines at cruise conditions. Nature 543, https://doi.org/10.1038/nature21420 (2017).
Petzold, A., Ström, J., Ohlsson, S. & Schröder, F.-P. Elemental composition and morphology of ice crystal residual particles in cirrus clouds and contrails. Atmos. Res. 49, https://doi.org/10.1016/S0169-8095(97)00083-5 (1998).
Petzold, A., Döpelheuer, A., Brock, C. A. & Schröder, F. In situ observations and model calculations of black carbon emission by aircraft at cruise altitude. J. Geophys. Res. 104, https://doi.org/10.1029/1999JD900460 (1999).
Hagen, D. E., Whitefield, P. D. & Schlager, H. Particulate emissions in the exhaust plume from commercial jet aircraft under cruise conditions. J. Geophys. Res. 101, https://doi.org/10.1029/95JD03276 (1996).
Liati, A. et al. Electron microscopic study of soot particulate matter emissions from aircraft turbine engines. Environ. Sci. Technol. 48, https://doi.org/10.1021/es501809b (2014).
Kärcher, B., Burkhardt, U., Bier, A., Bock, L. & Ford, I. J. The microphysical pathway to contrail formation. J. Geophys. Res. 120, https://doi.org/10.1029/2015JD023491 (2015).
Marcolli, C. Technical note: fundamental aspects of ice nucleation via pore condensation and freezing including Laplace pressure and growth into macroscopic ice. Atmos. Chem. Phys. 20, https://doi.org/10.5194/acp-20-3209-2020 (2020).
Rockne, K. J., Taghon, G. L. & Kosson, D. S. Pore structure of soot deposits from several combustion sources. Chemosphere 41, https://doi.org/10.1016/S0045-6535(00)00040-0 (2000).
Sorenson, C. M. The mobility of fractal aggregates: a review. Aerosol Sci. Technol. 45, https://doi.org/10.1080/02786826.2011.560909 (2011).
Lohmann, U. et al. Future warming exacerbated by aged-soot effect on cloud formation. Nat. Geosci. 13, https://doi.org/10.1038/s41561-020-0631-0 (2020).
Bond, T. C. et al. Bounding the role of black carbon in the climate system: A scientific assessment. J. Geophys. Res. 118, https://doi.org/10.1002/jgrd.50171 (2013).
DeMott, P. J., Chen, Y., Kreidenweis, S. M., Rogers, D. C. & Sherman, D. E. Ice formation by black carbon particles. Geophys. Res. Lett. 26, https://doi.org/10.1029/1999GL900580 (1999).
Zhang, C. et al. The effects of morphology, mobility size and SOA material coating on the ice nucleation activity of black carbon in the cirrus regime. Atmos. Chem. Phys. 20, https://doi.org/10.5194/acp-2020-809 (2020).
Crawford, I. et al. Studies of propane flame soot acting as heterogeneous ice nuclei in conjunction with single particle soot photometer measurements. Atmos. Chem. Phys. 11, https://doi.org/10.5194/acp-11-9549-2011 (2011).
Persiantseva, N. M., Popovicheva, O. B. & Shonija, N. K. Wetting and hydration of insoluble soot particles in the upper troposphere. J. Environ. Monit. 6, https://doi.org/10.1039/B407770A (2004).
Popovicheva, O. et al. Effect of soot on immersion freezing of water and possible atmospheric implications. Atmos. Res. 90, https://doi.org/10.1016/j.atmosres.2008.08.004 (2008).
Kireeva, E. D., Popovicheva, O. B., Persiantseva, N. M., Khokhlova, T. D. & Shonija, N. K. Effect of black carbon particles on the efficiency of water droplet freezing. Colloid J. 71, https://doi.org/10.1134/s1061933x09030090 (2009).
Shonija, N. K., Popovicheva, O. B., Persiantseva, N. M., Savel'ev, A. M. & Starik, A. M. Hydration of aircraft engine soot particles under plume conditions: Effect of sulfuric and nitric acid processing. J. Geophys. Res. 112, https://doi.org/10.1029/2006JD007217 (2007).
Wei, Y., Zhang, Q. & Thompson, J. E. The wetting behavior of fresh and aged soot studied through contact angle measurements. Atmos. Clim. Sci. 7, https://doi.org/10.4236/acs.2017.71002 (2017).
Zhang, D. & Zhang, R. Laboratory investigation of heterogeneous interaction of sulfuric acid with soot. Environ. Sci. Technol. 39, https://doi.org/10.1021/es050372d (2005).
Ray, D. et al. Hygroscopic coating of sulfuric acid shields oxidant attack on the atmospheric pollutant benzo(a)pyrene bound to model soot particles. Sci. Rep. 8, https://doi.org/10.1038/s41598-017-18292-z (2018).
We thank Andreas Schäfler and Christoph Kiemle for providing us with in situ measurements of vertical profiles of moisture and temperature. We gratefully acknowledge David Fahey for commenting on the manuscript. F.M. received funding from the European Union's Horizon 2020 research and innovation programme under the Marie Skłodowska-Curie grant Agreement No. 890200.
Open Access funding enabled and organized by Projekt DEAL.
Institute of Atmospheric Physics, DLR Oberpfaffenhofen, Wessling, Germany
Bernd Kärcher
Department of Chemistry, University of British Columbia, Vancouver, BC, Canada
Fabian Mahrt
Laboratory of Environmental Chemistry, Paul Scherrer Institute, Villigen, Switzerland
Institute for Atmospheric and Climate Science, ETH, Zurich, Switzerland
Claudia Marcolli
B.K. designed the research, carried out the simulations and wrote the original manuscript. F.M. composed the storyline graphic. C.M. and F.M. devised and developed the parametrization of aircraft soot-induced ice formation and growth. All authors jointly discussed the overall concept and methods, interpreted the simulation results and contributed to the final writing.
Correspondence to Bernd Kärcher.
The authors declare no competing interests.
Peer review information Primary handling editor: Heike Langenberg.
Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.
Kärcher, B., Mahrt, F. & Marcolli, C. Process-oriented analysis of aircraft soot-cirrus interactions constrains the climate impact of aviation. Commun Earth Environ 2, 113 (2021). https://doi.org/10.1038/s43247-021-00175-x
Soil salinity, household wealth and food insecurity in tropical deltas: evidence from south-west coast of Bangladesh
Part of a collection:
Human Security and Social Inclusion
Sylvia Szabo1,
Md. Sarwar Hossain2,
W. Neil Adger3,
Zoe Matthews1,
Sayem Ahmed4,
Attila N. Lázár5 &
Sate Ahmad4
Sustainability Science volume 11, pages 411–421 (2016)
As a creeping process, salinisation represents a significant long-term environmental risk in coastal and deltaic environments. Excess soil salinity may exacerbate existing risks of food insecurity in densely populated tropical deltas, which is likely to have a negative effect on human and ecological sustainability of these regions and beyond. This study focuses on the coastal regions of the Ganges–Brahmaputra delta in Bangladesh, and uses data from the 2010 Household Income and Expenditure Survey and the Soil Resource Development Institute to investigate the effect of soil salinity and wealth on household food security. The outcome variables are two widely used measures of food security: calorie availability and household expenditure on food items. The main explanatory variables tested include indicators of soil salinity and household-level socio-economic characteristics. The results of logistic regression show that in unadjusted models, soil salinisation has a significant negative effect on household food security. However, this impact becomes statistically insignificant when households' wealth is taken into account. The results further suggest that education and remittance flows, but not gender or working status of the household head, are significant predictors of food insecurity in the study area. The findings indicate the need to focus scholarly and policy attention on reducing wealth inequalities in tropical deltas in the context of the global sustainable deltas initiative and the proposed Sustainable Development Goals.
Recent studies reveal that even though the hunger target of the Millennium Development Goal 1 is likely to be within reach (UN 2013), around 12 % of the global population remain deprived of food and one in eight people is suffering from chronic hunger (FAO et al. 2013). Moreover, because of the growing global population and rising consumption, it is estimated that in 2050 the demand for food could increase by more than 70 % (Royal Society 2009; World Bank 2008). Challenges of meeting this rising demand are likely to be exacerbated by long-term environmental changes in agricultural regions, interacting with demographic changes, political instability and natural disasters (Poppy et al. 2014; Smith et al. 2000). Densely populated delta regions, in the Global South in particular, will be at risk of failing to meet their global and national developmental goals, despite the declining trends of food insecurity over the past 20 years in the developing world (FAO et al. 2013; Smith et al. 2000).
Delta regions occupy 1 % of the earth's land area and are home to more than 500 million people (Foufoula-Georgiou et al. 2011; Woodroffe et al. 2006). Because deltas constitute "rice bowls" of the world, deterioration of the tropical megadeltas poses serious threats to food security for more than half of the world's population that relies on rice as a staple food (Hoanh et al. 2010; Pont et al. 2002). Low elevation also makes human settlement in deltas exposed to coastal flooding and storm surges (Syvitski 2008). Deltas are subject to adverse environmental changes principally through human modifications of land use over the past century, notably through rapid deforestation, urbanisation and agricultural development. Moreover, human interventions at a local level, such as dam-induced changes of river flow regime, oil extraction and groundwater extraction, influence the rate of subsidence which in turn contributes to the sinking of deltas. These changes are likely to have negative environmental and social consequences thereby putting human populations at risk of food insecurity. Some of the deltas (e.g., Ganges–Brahmaputra and Yangtze River basin) are already facing the problems of salinisation (Alam 1996) and water quality degradation (Dearing et al. 2014) which not only affects the land use and agriculture productivity of the region, but also the health and well-being of populations and the integrity of socio-ecological systems of deltas. Furthermore, soil and water salinity are projected to increase because of upstream water diversions, sea level rise and climate change (Ericson et al. 2006; Syvitski et al. 2005; Wong et al. 2014).
A number of studies have shown that higher temperatures and sea level rise have a significant effect on soil salinity, in particular in delta regions (Bazzaz et al. 1996; Gornall et al. 2010; Haider and Hossain 2013; Nicholls 2011). Tidal penetration can increase the extent of perennially and seasonally saline soils and diminish soil organic content (Bazzaz et al. 1996). Soil salinity can in turn have a negative effect on production of agricultural crops. Globally, increases in the incidence and magnitude of extreme high sea levels are very likely to continue into the late twenty-first century, thus exacerbating the existing threats to human livelihoods (IPCC 2013). Understanding these dynamics affecting food security is also critical in the context of the global sustainable deltas initiative called for by the scientific community (Foufoula-Georgiou et al. 2011, 2013). This initiative aims at generating and sharing knowledge on environmentally vulnerable delta regions and raising awareness of these regions.
There are, of course, well-established associations between food security and households' socio-economic characteristics in other geographical contexts (FAO et al. 2013; Martin et al. 2004; Sraboni et al. 2014). Yet, there is limited evidence regarding these relationships in tropical delta regions despite the crucial role which deltas play in regional and global food supplies (Foufoula-Georgiou et al. 2011; Garschagen et al. 2012).
The present study hypothesises that soil salinity as well as households' socio-economic characteristics have a direct influence on households' food security in rural deltaic environments. More specifically, the first hypothesis states that salinisation is positively associated with household food insecurity. The second hypothesis assumes that there is an association between household's wealth and food security. By undertaking this analysis, the main objective of the present study is to contribute knowledge regarding the determinants of food insecurity in tropical delta regions in the context of the sustainable deltas initiative and a wider sustainable development agenda.
The study area encompasses nine districts across Barisal and Khulna divisions of Bangladesh (Fig. 1). In Khulna division, these districts are Bagerhat, Khulna and Satkhira. In the Barisal division, the six districts are Barisal, Barguna, Bhola, Jhalokati, Patuakhali and Pirojpur. As per 2011 census data, the overall population of the study area exceeds 14 million and is projected to slightly increase by 2030, if constant rates of fertility, mortality and migration are assumed (Szabo et al. 2015a, b). However, the future size and structure of the population in the region will greatly depend on future migration dynamics (Szabo et al. 2015a, b). Importantly, this densely populated delta is one of the most vulnerable regions to climate change in the world (Milliman et al. 1989). Due to sea level rise, overextraction of groundwater, upstream diversion of surface water and shrimp farming, the coastal Ganges–Brahmaputra delta has been experiencing a relatively rapid increase in groundwater salinity, river salinity and soil salinity (Dasgupta et al. 2014; Ahsan and SDRI Team 2010).
The study area in coastal Bangladesh
Although the coastal zone of Bangladesh is predominantly used for rice cultivation, shrimp farming is also becoming an important source of income in the study area (Chowdhury et al. 2011). Since the 1970s, the international demand for shrimp, accompanied by relatively high prices for shrimp products, has triggered increasing conversion of traditional agriculture into shrimp cultivation ponds (Rahman et al. 2013). In addition, the salt tolerance of current rice varieties is between 3 and 12 dS/m (for the dry season Boro rice varieties it is 6–12 dS/m), thus soil salinisation can also force farmers to shift from agriculture to aquaculture. Consequently, many rice fields, predominantly in the Khulna district, have been transformed into shrimp farms ("ghers") and shrimps have become major export commodities (Ali 2006; Rahman et al. 2013). One of the main negative consequences of this changing landscape was increased water and soil salinisation gradually taking place in the region. Shrimp ponds contribute to accelerating depletion of base minerals and make adjacent soils more acid and saline, a process which is difficult to reverse (Ali 2006). Between 1970 and 2010, river salinity increased by a factor of 2 to 10 (Hossain and Dearing 2013; Hossain et al. 2015), whereas soil salinity affected 0.223 million ha (26.7 %) during the same time period. Around 450,000 ha of coastal land were affected by salinity ingress where soil salinity exceeds 8 dS/m (SRDI 2010). Considering the above salt tolerance of rice varieties, this area is likely to be marginally productive, unless good irrigation and land management practices are in place to mitigate the effect of such soil salinity levels.
Importantly, poverty in this region is still a predominantly rural phenomenon, as is the case in other parts of Bangladesh (World Bank 2011), despite an increasing urbanisation of poverty (Planning Commission 2011). Given climate change and environmental vulnerability of the south-west coastal region, there is growing concern that households, in particular those from the poorest segments of the society, would need to develop additional coping strategies to mitigate the current and foreseen food insecurity risks (Faisal and Parveen 2004). In the absence of access to sources of financing, farmers' livelihood strategies are likely to entail not only further conversion to shrimp farming but also increasing out-migration to urban areas, including to regions located outside of the immediate coastal area. Recent data from the 2011 Bangladeshi Population and Housing Census show that in some districts in the study area, including Khulna and Barisal, the population growth rate since the previous decennial census has been negative, indicating high out-migration rates (BBS 2012a, b).
The conceptual framework (Fig. 2) is used to test the study's hypotheses. While the main focus of the framework is on pathways between soil salinisation, household socio-economic characteristics and food security, it is acknowledged that these associations can also be affected by other factors, as portrayed in the conceptual framework in Fig. 2. The most important mechanism is the adverse impact of salinisation on provisioning ecosystem services, such as fresh water, food and fibre. These negative impacts can be particularly strong in the absence of an adequate policy and regulatory framework resulting from weak governance structures. For example, river basin management constitutes a critical aspect of natural resource management and allows optimising the productivity of resources in the long run (Montero et al. 2006). Inadequate river basin management can lead to increased salinisation, as was for example the case in the Murray–Darling basin towards the end of the twentieth century (Squires et al. 2014). Climate change, in particular sea level rise, constitutes a threat to agricultural activities in delta regions because of salinisation of surface and ground waters leading to greater soil salinity (Nicholls 2011). Salinisation and thus high levels of soil salinity can affect households' well-being measured by socio-economic indicators. For example, crop damage and changing patterns in crop production linked to salinity intrusion can have an adverse effect on both household livelihood strategies and outcomes.
Concurrently, socio-economic factors, such as households' wealth can have a direct and indirect effect on household food security. Households' wealth and education, which is an indicator of human capital (Goujon and Lutz 2004; Lutz et al. 2008), have been shown as significant determinants of food insecurity in other geographical contexts (Smith and Haddad 2000; Subramanian and Smith 2006) and are included in Fig. 2. Household food security can in turn influence nutritional and health outcomes. It has been established that malnutrition has a negative effect on correct functioning of every organ system, including muscle function and gastrointestinal function (Saunders and Smith 2010). Both household food insecurity and individual health outcomes are contributors to household livelihood outcomes and wider well-being.
Complex mechanisms affect household food security in the coastal Ganges–Brahmaputra delta
In Bangladesh, analysis of secondary data from the 2011 Demographic and Health Survey (DHS) reveals that households in the highest wealth bracket (based on the quintile distribution of their assets) are considerably less likely to suffer from food insecurity compared to the poorest households. Based on the indicator of frequency of skipping meals, 82 % of women in Bangladesh responded that they never had to skip a meal, while only 56 % of the poorest females were in the same situation (NIPORT et al. 2013). In addition, the proportion of those skipping meals was higher in rural areas as compared to urban areas (NIPORT et al. 2013). Given the risk of food insecurity linked to salinisation, farmers in the coastal Ganges–Brahmaputra delta have needed to adopt innovative approaches resulting in changing cropping patterns. According to a recent study investigating changing livelihood strategies in the coastal delta region, 70 % of interviewed farmers from Patuakhali district stated that their shifts to different crop production were motivated by the potential for increased food security (Islam et al. 2011).
Data and methods
The dataset
This research makes use of the 2010 Household Income and Expenditure Survey (HIES) data as well as upazila (sub-district)-level soil salinity data developed by the Soil Resource Development Institute (Ahsan and SDRI Team 2010). The 2010 HIES followed the standard two-stage stratified random sampling procedure. The integrated multipurpose sample design included 1000 primary sampling units (PSUs) including 640 rural and 360 urban PSUs. In the Barisal division, 980 households have been selected, while in Khulna division there were 1800 sample households (BBS 2011). The analysis in the present study considers a sample of 993 households, all located in the nine rural agriculture-dominated districts of the coastal Ganges–Brahmaputra delta across Khulna and Barisal divisions.
Key variables
Outcome variable
The outcome variable measures household-level food security and is based on food insecurity indicators proposed by the International Food Policy Research Institute (IFPRI) (Smith and Subandoro 2007). This approach considers two key indicators of food security, firstly the percentage of total household expenditure on food and secondly the daily total calorie availability at the household level. A household is categorised as food insecure if more than 75 % of its total expenditure is on food items (see also Smith and Subandoro 2007). In addition, a household is classified as food insecure if its daily calorie requirements are higher than its total reported energy intake. Taking into account these two variables allows accounting for both the availability and access aspects of the food security concept. A final categorisation has been developed based on the combination of the above two variables; a household is categorised as food insecure if at least one of the above conditions is met.
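A hedged sketch of this two-indicator classification is shown below; the column names and example records are hypothetical, and in practice the calorie requirement would follow from the IFPRI guidelines applied to each household's composition.

```python
import pandas as pd

# Hypothetical household records; column names and values are illustrative only.
households = pd.DataFrame({
    "food_expenditure":     [5200, 3100, 4700],   # monthly, in Taka
    "total_expenditure":    [6000, 5500, 7200],   # monthly, in Taka
    "calorie_availability": [2050, 2600, 1900],   # kcal per capita per day
    "calorie_requirement":  [2122, 2122, 2122],   # kcal per capita per day (illustrative)
})

# Indicator 1 (access): share of total expenditure spent on food above 75 %
high_food_share = households["food_expenditure"] / households["total_expenditure"] > 0.75

# Indicator 2 (availability): daily calorie availability below the requirement
calorie_deficit = households["calorie_availability"] < households["calorie_requirement"]

# Food insecure if at least one of the two conditions is met
households["food_insecure"] = (high_food_share | calorie_deficit).astype(int)
print(households)
```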
Explanatory variables
Key explanatory variables include households' socio-economic characteristics, such as wealth, education, gender and engagement in agricultural activities, and upazila-level soil salinity. In addition, given the volume of remittances in Bangladesh as well as the importance of remittances for livelihoods (Adams and Page 2005; UNCTAD 2012b), a binary variable measuring whether or not a household has been receiving remittances has been incorporated into the model. Households' wealth status has been categorised based on the asset index variable created for the purpose of this study. Although not without their limitations (Falkingham and Namazie 2002), asset indices are widely used in socio-economic analyses to approximate households' wealth. A principal component analysis (PCA) was applied to survey responses on ownership of a set of key assets and the values of the index were based on the first principal component. The list of variables used for the creation of the asset index is provided in the "Appendix". PCA is a commonly used technique when computing asset indices; although traditionally applied to continuous variables, Filmer and Pritchett (2001) argued that it can be a valid method for categorical and binary data such as ownership of assets. Higher scores of the index indicate more affluent households; households can be ranked from the lowest to the highest asset score and divided into five categories to form asset quintiles. The results of the Kaiser–Meyer–Olkin test of sampling adequacy (KMO = 0.67) attested that partial correlations amongst variables were high enough for the PCA to be an adequate method of data reduction in the analysis.
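A minimal sketch of the asset-index construction, assuming a binary asset-ownership matrix; the asset list and data are hypothetical stand-ins for the HIES variables listed in the Appendix, and the sign of the first component may need flipping so that higher scores correspond to more assets.

```python
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

# Hypothetical binary ownership data (1 = household owns the asset)
rng = np.random.default_rng(0)
assets = pd.DataFrame(
    rng.integers(0, 2, size=(200, 6)),
    columns=["radio", "television", "mobile_phone", "bicycle", "bed", "fan"],
)

# Asset index = first principal component of the standardised ownership indicators
standardised = (assets - assets.mean()) / assets.std()
asset_index = PCA(n_components=1).fit_transform(standardised)[:, 0]

# Rank households and split into wealth quintiles (1 = poorest, 5 = richest)
wealth_quintile = pd.qcut(asset_index, 5, labels=[1, 2, 3, 4, 5])
print(pd.Series(wealth_quintile).value_counts().sort_index())
```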
The household-level dataset is complemented by upazila-level soil salinity data published by Ahsan and SDRI Team (2010). This report contains field observation-based (peak) soil salinity data for 2009 for all 70 upazilas. Detailed information regarding the quality of the soil salinity data can be found in the methods section of the same report. This information enables a spatial differentiation of the salinisation problem within the coastal delta region. In the present study, two main indicators of salinisation are considered. Firstly, the extent of salinity affected areas was calculated as the percentage of saline area (2 dS/m or more) in each upazila. Secondly, a weighted average salinity score (i.e. concentration) was calculated from the soil salinity data (measured as dS/m).
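As an illustration, both indicators can be computed from a hypothetical breakdown of an upazila's land area into soil-salinity classes; the class values and areas below are invented for the example.

```python
# Hypothetical breakdown of one upazila into soil-salinity classes:
# (class-average salinity in dS/m, area in ha)
salinity_classes = [(1.0, 12000), (3.0, 18000), (6.0, 9000), (10.0, 3000)]
total_area = sum(area for _, area in salinity_classes)

# Indicator 1: percentage of the upazila area with salinity of 2 dS/m or more
saline_area_pct = 100.0 * sum(a for ec, a in salinity_classes if ec >= 2.0) / total_area

# Indicator 2: area-weighted average salinity score (dS/m)
weighted_salinity = sum(ec * a for ec, a in salinity_classes) / total_area

print(f"saline area: {saline_area_pct:.1f} %, weighted salinity: {weighted_salinity:.2f} dS/m")
```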
To test the hypotheses, the study uses econometric methods, including descriptive statistics and regression modelling. To compare mean salinity scores and salinity areas between food secure and food insecure households, one-way ANOVA tests were used. Complementarily, the impact of households' socio-economic status on food security outcomes was assessed by means of χ2 statistics.
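These comparisons could, for instance, be carried out as follows; the arrays and the cross-tabulation are hypothetical stand-ins for the survey data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# One-way ANOVA: compare mean salinity scores of food-secure vs food-insecure households
salinity_secure = rng.normal(3.4, 1.0, size=300)     # hypothetical upazila scores, dS/m
salinity_insecure = rng.normal(3.8, 1.0, size=250)
f_stat, p_anova = stats.f_oneway(salinity_secure, salinity_insecure)

# Chi-square test: wealth quintile (rows) against food security status (columns)
cross_tab = np.array([[60, 140],   # hypothetical counts, poorest quintile
                      [75, 125],
                      [95, 105],
                      [120, 80],
                      [150, 50]])  # richest quintile
chi2, p_chi2, dof, _ = stats.chi2_contingency(cross_tab)

print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")
print(f"Chi-square: chi2 = {chi2:.1f}, dof = {dof}, p = {p_chi2:.4f}")
```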
Because the outcome variable is binary, a series of logistic regression models were applied. The results of both unadjusted models and models which control for selected confounding factors are reported and discussed. First, the relationship between salinity affected area and households' food security is examined. Then, selected socio-economic characteristics (not including wealth quintiles) are added. The third model controls additionally for households' wealth status. The fourth and fifth models represent the unadjusted and adjusted relationships, respectively, between weighted salinity score and absence or presence of food insecurity in a household.
The following equation was estimated to examine the unadjusted relationship between household food insecurity and salinity intrusion:
$${\text{logit}}(Y_{i}) = \beta_{0} + \beta_{1} X_{i} + \varepsilon_{i}, \quad \text{where } i = 1, 2, \ldots, n,$$
where \(Y_{i}\) denotes household food insecurity status with values 0 or 1 (0 = food secure, 1 = food insecure), \(\beta_{0}\) is a constant, \(X_{i}\) indicates the salinity score, \(\beta_{1}\) is the coefficient that shows the magnitude and direction of the relationship with \(Y_{i}\), and \(\varepsilon_{i}\) is the error term.
The adjusted models with control variables were specified as follows:
$${\text{logit}}(Y_{i}) = \beta_{0} + \beta_{1} X_{1i} + \beta_{2} X_{2i} + \beta_{3} X_{3i} + \beta_{4} X_{4i} + \cdots + \varepsilon_{i}, \quad i = 1, 2, \ldots, n,$$
where \(Y_{i}\) denotes food insecurity status with values 0 or 1 (0 = food secure, 1 = food insecure), \(\beta_{0}\) is a constant, \(X_{1i}\) indicates soil salinity and \(\beta_{1}\) is the coefficient that shows the magnitude and direction of its relationship with \(Y_{i}\). \(X_{2i}, X_{3i}, X_{4i}, \ldots\) denote the control variables, for example, socio-economic characteristics, wealth quintiles and the characteristics of the household's head. \(\beta_{2}, \beta_{3}, \beta_{4}, \ldots\) are the coefficients of the corresponding variables and \(\varepsilon_{i}\) is the error term.
The results of logistic regression are interpreted using odds ratios (OR) and associated confidence intervals (CI). An OR measures the odds of an outcome accounting for the effect of a selected explanatory variable compared with the odds of the outcome without exposure to such effect (Szumilas 2010). Confidence intervals indicate the range of plausible values for estimated ORs (Katz 2003). Standard post-estimation tests are applied to evaluate model fit and facilitate model selection. These include the likelihood ratio (LR) test, Bayesian information criterion (BIC) and Akaike Information Criterion (AIC). The results of these tests are reported in Table 2.
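A hedged sketch of this model sequence using statsmodels is given below; the synthetic variables are placeholders for the HIES/SRDI data, only a subset of the controls is included, and the likelihood-ratio comparison is computed directly from the model log-likelihoods.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(2)
n = 993   # households in the analysed sample

# Synthetic stand-ins for the real covariates
df = pd.DataFrame({
    "salinity_score": rng.normal(3.6, 1.5, n),        # weighted dS/m
    "educ_years_head": rng.integers(0, 13, n),
    "receives_remittance": rng.integers(0, 2, n),
})
true_logit = 0.5 + 0.05 * df["salinity_score"] - 0.10 * df["educ_years_head"] \
             - 0.50 * df["receives_remittance"]
df["food_insecure"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-true_logit)))

# Unadjusted model (analogue of model 4): weighted salinity score only
m_unadj = sm.Logit(df["food_insecure"], sm.add_constant(df[["salinity_score"]])).fit(disp=0)

# Adjusted model (analogue of model 5): salinity plus a subset of controls
X_adj = sm.add_constant(df[["salinity_score", "educ_years_head", "receives_remittance"]])
m_adj = sm.Logit(df["food_insecure"], X_adj).fit(disp=0)

# Odds ratios with 95 % confidence intervals, plus fit statistics
or_table = pd.concat([np.exp(m_adj.params), np.exp(m_adj.conf_int())], axis=1)
or_table.columns = ["OR", "2.5 %", "97.5 %"]
print(or_table)
print(f"AIC: {m_adj.aic:.1f}, BIC: {m_adj.bic:.1f}")

# Likelihood-ratio test of the adjusted against the nested unadjusted model
lr_stat = 2.0 * (m_adj.llf - m_unadj.llf)
print("LR test p-value:", stats.chi2.sf(lr_stat, df=2))   # two added parameters
```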
Table 1 provides an overview of descriptive statistics of the key variables used in the analysis. As can be observed, a considerable proportion of the population is food insecure, with 44.7 % of all households spending 75 % or more of their total expenditure on food and 33.2 % having insufficient daily energy intake. In terms of salinity intrusion, the average percentage of saline area in each upazila is approximately 40 % and the overall weighted average of soil salinity is 3.62 dS/m. When considering the socio-economic characteristics of households in the study area, it can be noticed that the majority of households are engaged in agricultural activities. More specifically, 81.7 % of households reported raising livestock, while 51.5 % were engaged in crop cultivation. The average age of the household head was 47.6 years and the average education of the household head was 3.6 years. Importantly, 16.2 % of all households reported receiving either international or domestic remittances.
Table 1 Descriptive statistics of key variables in the analysis
Regression results
The results of regression modelling of household food insecurity are reported in Table 2. A sequential variable selection routine was applied, first testing the impact of soil salinity. Overall, the results confirm the research hypotheses, although certain nuances should be noted. More specifically, the results of the unadjusted model 1 suggest that a significant positive association (p < 0.05) exists between soil salinity and household food insecurity. When additional confounding variables are added (model 2), the impact of soil salinity remains significant, although only at the 10 % significance level. As expected, the education level of the household head and involvement in agricultural activities are negatively associated with household food insecurity. In particular, education, which is an indicator of human capital (Goujon and Lutz 2004; Lutz and Goujon 2001), remains a strong predictor of food security across all models (in model 2: OR = 0.90, in model 3: OR = 0.93).
Table 2 Regression results with household food insecurity as the outcome variable
Model 3 incorporates the effect of household wealth approximated by the asset index. The impact of household wealth is strong, in particular for the richest strata of society (the top three wealth quintiles are highly significant). Based on the results of model 3, ceteris paribus, in the study area, the odds of being food insecure for the richest households are approximately 0.26 times the odds for the poorest households. As expected, household size is positively associated with food insecurity (p < 0.05), thus confirming traditional Malthusian claims regarding population pressure on natural resources. In addition, involvement in agricultural activities, especially raising livestock, has an attenuating effect on household food insecurity.
An interesting and important result is that related to the impact of remittances. As highlighted previously, Bangladesh is the main receiver of remittances amongst the LDCs (UNCTAD 2012a), which is likely to positively affect the well-being of receiving household members. Based on the results of model 3, the odds of being food insecure for households which have been receiving remittances are around 0.63 times the odds of being food insecure for households which have not been receiving any remittances. To explore this effect further, we performed a separate test using an unadjusted model with remittances as the only explanatory variable. The results of this model (unreported) suggested that when no other controlling factors are accounted for, the impact of receiving remittances is even stronger (OR = 0.45, p < 0.01). The results of the LR, AIC and BIC tests suggest that model 3, which incorporates household wealth and other socio-economic characteristics, performs best and thus should be considered the most appropriate model amongst the first three.
As outlined in the "Data and methods" section, the study also tested for the effect of an alternative indicator of soil salinity based on a weighted average. The results including this variable are reported in models 4 and 5. This approach allowed validating the results reported in models 1–3. As can be seen, in an unadjusted model, soil salinity (i.e. the weighted salinity score) is statistically significant (albeit only at 10 %). However, when other confounding factors are taken into account, in particular households' assets, revenue from remittances and education, soil salinity is no longer statistically significant. As was the case in model 3, wealth, education and remittances are the strongest predictors of food insecurity. Moreover, gender, approximated by the sex of the household head, is not statistically significant in either of the models. Finally, when considering the results of the LR tests and the values of BIC and AIC, it can be concluded that model 5 performs best and should thus be the preferred model.
This study assessed the impact of soil salinity and household socio-economic characteristics on food security. It tested hypotheses that soil salinity is negatively associated with household food security and that households' wealth has a positive effect on food security. The results of the present study are in line with the existing evidence pertaining to the negative impact of salinity on household food security (Parvin and Ahsan 2013). Importantly, the findings, however, show that the introduction of socio-economic characteristics, in particular household wealth, alters the nature of the association between salinity and household food security. The results suggest that household wealth, education and remittances are the most important predictors of household food security. These results complement the finding by Akter and Basher (2014) that rises in food prices have a disproportionate short-term effect on the poorest segments of the society in rural Bangladesh. The findings also highlight the importance of emerging research on migration and food security in developing countries (Azzarri and Zezza 2011; Zezza et al. 2011) and the need to further disentangle the pathways through which remittances affect micro- and macro-level food security.
Overall, the results show that salinisation of soil, as an example of long-term environmental degradation, is an important exacerbating risk, although well-established social determinants of food security remain crucial in addressing micro-level risks of food insecurity. Therefore, the results of the present study confirm existing research investigating similar questions. For example, a relatively recent study based on the analysis of 2005 HIES data showed that both education and wealth were significant predictors of household food security in Bangladesh (Faridi and Wadood 2010). With regard to the presupposed impact of household involvement in agricultural activities, similar findings were reported in a paper investigating nutritional and food security status in Dinajpur in northern Bangladesh. The authors found that crop cultivation and raising livestock were not associated with food security, although the models did not control for households' wealth status (Hillbruner and Egan 2008). Finally, the insignificant effect of the gender of the household head resonates with the findings by Mallick and Rafi (2010), who showed that female-headed households were not significantly more food insecure compared to male-headed households. This result could be explained, at least partially, by the presence of informal distributive mechanisms in Bangladesh (Mallick and Rafi 2010).
While the present study advances the scientific understanding of the determinants of food security in salinity-threatened areas, there are limitations. First, there are additional elements of salinity on well-being which are unaccounted for here. Soil salinity is affected by many external factors, including seasonality and natural hazards (Brammer 2014) and it affects well-being indirectly, through its impact on health, with those impacts being highly seasonal (Brainerd and Menon 2014). Second, environmental changes related to seasonality affect the availability of substitute income sources and informal food sources on food security at the household level, though these are difficult to capture. It is clear, for example, that shrimp collection, forest products and other food sources are important sources of nutrition for landless households at specific times of the year (Arnold et al. 2011). There is certainly evidence from Bangladesh that many ecosystem services from agriculture and delta ecosystems such as mangroves are directly affected by short-term stresses, including cyclones and storms, which interact with longer term processes, such as salinity intrusion (Shameem et al. 2014; Uddin et al. 2013). As highlighted previously, a final limitation is related to the fact that salinity is measured at upazila level, which implies that temporal and spatial inter-cluster variations are likely to exist with respect to the degree of soil salinisation. While we acknowledge that within upazila differences are likely to exist, the analysis carried out in this paper aimed to quantify the impact of aggregated soil salinity. Such an approach is important in terms of providing an overview of cross-level associations between soil salinity and food security, and consequently developing relevant policy measures. A wide body of social and environmental research recognised the significance of aggregated level data at both global (Rockstrom et al. 2009) and meso-scales (Dearing et al. 2014) and results of these studies yielded important policy implications.
From the policy perspective, it should be stressed that several official policy documents, including the Perspective Plan of Bangladesh 2010–2021 (Planning Commission 2012) and the Poverty Reduction Strategy (IMF 2003), explicitly state the goal of achieving universal food security in the country. Given the results here, it is crucial to recognise stark wealth-based inequalities in households' food security in the rural Ganges–Brahmaputra delta region. With the likely increasing impact of climate change on livelihoods in tropical deltas, it is important to link both environmental and social development strategies, recognising the role that specific creeping processes may have in food production and distribution. In this regard, the proposed Sustainable Development Goals (SDGs) constitute a move in the right direction because of the increased focus on the developmental impacts of environmental and climate change and the emphasis on societal inequalities (UN 2014; UNSC 2015). In addition, the SDGs recognise the need for resilient agricultural practices and building resilience of the poor, which is particularly relevant to tropical delta regions (Szabo et al. 2015a, b; UN 2014). Future research should therefore consider explicitly the cross-level interlinkages between socio-economic and environmental impacts on food security in the context of tropical deltas.
Adams RH, Page J (2005) Do international migration and remittances reduce poverty in developing countries? World Dev 33(10):1645–1669. doi:10.1016/j.worlddev.2005.05.004
This work was supported by the ESPA Deltas project (Grant No. NE/J002755/1) and the Belmont Forum DELTAS project (Grant No. NE/L008726/1). The Ecosystem Services for Poverty Alleviation (ESPA) programme is funded by the Department for International Development (DFID), the Economic and Social Research Council (ESRC) and the Natural Environment Research Council (NERC). The Belmont Forum DELTAS project is co-funded by the Natural Environment Research Council (NERC). Md. Sarwar Hossain acknowledges financial support provided by a joint NERC/ESRC interdisciplinary PhD studentship award and the University of Southampton.
Division of Social Statistics and Demography, Faculty of Social and Human Sciences, University of Southampton, Highfield Campus, Southampton, SO17 1BJ, UK
Sylvia Szabo & Zoe Matthews
Department of Geography and Environment, University of Southampton, Southampton, UK
Md. Sarwar Hossain
Geography, College of Life and Environmental Sciences, University of Exeter, Exeter, UK
W. Neil Adger
International Centre for Diarrhoeal Disease Research, Bangladesh (ICDDR,B), Dhaka, Bangladesh
Sayem Ahmed & Sate Ahmad
Department of Engineering and the Environment, University of Southampton, Southampton, UK
Attila N. Lázár
Correspondence to Sylvia Szabo.
Handled by G. Kranjac-Bersavljevic, University for Development Studies, Ghana.
Appendix: Variables used in the principal component analysis
See Table 3.
Table 3 Variables used in PCA
Szabo, S., Hossain, M.S., Adger, W.N. et al. Soil salinity, household wealth and food insecurity in tropical deltas: evidence from south-west coast of Bangladesh. Sustain Sci 11, 411–421 (2016). https://doi.org/10.1007/s11625-015-0337-1
Issue Date: May 2016
Soil salinisation
Wealth inequalities
Ganges–Brahmaputra delta
Sustainable deltas
Works by Ronald Fagin
Reasoning About Knowledge. Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Vardi - 1995 - MIT Press.
Reasoning About Knowledge is the first book to provide a general discussion of approaches to reasoning about knowledge and its applications to distributed ...
Doxastic and Epistemic Logic in Logic and Philosophy of Logic
Reasoning in Epistemology
Probabilities on Finite Models. Ronald Fagin - 1976 - Journal of Symbolic Logic 41 (1):50-58.
What is an Inference Rule? Ronald Fagin, Joseph Y. Halpern & Moshe Y. Vardi - 1992 - Journal of Symbolic Logic 57 (3):1018-1045.
What is an inference rule? This question does not have a unique answer. One usually finds two distinct standard answers in the literature; validity inference ($\sigma \vdash_\mathrm{v} \varphi$ if for every substitution $\tau$, the validity of $\tau[\sigma]$ entails the validity of $\tau[\varphi]$), and truth inference ($\sigma \vdash_\mathrm{t} \varphi$ if for every substitution $\tau$, the truth of $\tau[\sigma]$ entails the truth of $\tau[\varphi]$). In this paper we introduce a general semantic framework that allows us to investigate the notion of inference more carefully. Validity inference and truth inference are in some sense the extremal points in our framework. We investigate the relationship between various types of inference in our general framework, and consider the complexity of deciding if an inference rule is sound, in the context of a number of logics of interest: classical propositional logic, a nonstandard propositional logic, various propositional modal logics, and first-order logic.
Monadic Generalized Spectra. Ronald Fagin - 1975 - Mathematical Logic Quarterly 21 (1):89-96.
Generalized Quantifiers in Philosophy of Language
Reachability is Harder for Directed Than for Undirected Finite Graphs. Miklos Ajtai & Ronald Fagin - 1990 - Journal of Symbolic Logic 55 (1):113-150.
Although it is known that reachability in undirected finite graphs can be expressed by an existential monadic second-order sentence, our main result is that this is not the case for directed finite graphs (even in the presence of certain "built-in" relations, such as the successor relation). The proof makes use of Ehrenfeucht-Fraisse games, along with probabilistic arguments. However, we show that for directed finite graphs with degree at most k, reachability is expressible by an existential monadic second-order sentence.
Mathematical Logic in Formal Sciences
A Quantitative Analysis of Modal Logic. Ronald Fagin - 1994 - Journal of Symbolic Logic 59 (1):209-252.
We do a quantitative analysis of modal logic. For example, for each Kripke structure M, we study the least ordinal μ such that for each state of M, the beliefs of up to level μ characterize the agents' beliefs (that is, there is only one way to extend these beliefs to higher levels). As another example, we show the equivalence of three conditions, that on the face of it look quite different, for what it means to say that the agents' beliefs have a countable description, or putting it another way, have a "countable amount of information". The first condition says that the beliefs of the agents are those at a state of a countable Kripke structure. The second condition says that the beliefs of the agents can be described in an infinitary language, where conjunctions of arbitrary countable sets of formulas are allowed. The third condition says that countably many levels of belief are sufficient to capture all of the uncertainty of the agents (along with a technical condition). The fact that all of these conditions are equivalent shows the robustness of the concept of the agents' beliefs having a "countable description".
Modal and Intensional Logic in Logic and Philosophy of Logic
Common Knowledge Revisited. Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Y. Vardi - 1999 - Annals of Pure and Applied Logic 96 (1-3):89-105.
Epistemic Logic in Logic and Philosophy of Logic
A Spectrum Hierarchy. Ronald Fagin - 1975 - Mathematical Logic Quarterly 21 (1):123-134.
Reasoning About Knowledge: A Response by the Authors. [REVIEW] Ronald Fagin, Joseph Y. Halpern, Yoram Moses & Moshe Y. Vardi - 1997 - Minds and Machines 7 (1):113-113.
I'm OK If You're OK: On the Notion of Trusting Communication. [REVIEW] Ronald Fagin & Joseph Y. Halpern - 1988 - Journal of Philosophical Logic 17 (4):329-354.
We consider the issue of what an agent or a processor needs to know in order to know that its messages are true. This may be viewed as a first step to a general theory of cooperative communication in distributed systems. An honest message is one that is known to be true when it is sent (or said). If every message that is sent is honest, then of course every message that is sent is true. Various weaker considerations than honesty are investigated with the property that provided every message sent satisfies the condition, then every message sent is true.
Comparing the Power of Games on Graphs. Ronald Fagin - 1997 - Mathematical Logic Quarterly 43 (4):431-455.
The descriptive complexity of a problem is the complexity of describing the problem in some logical formalism. One of the few techniques for proving separation results in descriptive complexity is to make use of games on graphs played between two players, called the spoiler and the duplicator. There are two types of these games, which differ in the order in which the spoiler and duplicator make various moves. In one of these games, the rules seem to be tilted towards favoring the duplicator. These seemingly more favorable rules make it easier to prove separation results, since separation results are proven by showing that the duplicator has a winning strategy. In this paper, the relationship between these games is investigated. It is shown that in one sense, the two games are equivalent. Specifically, each family of graphs used in one game to prove a separation result can in principle be used in the other game to prove the same result. This answers an open question of Ajtai and the author from 1989. It is also shown that in another sense, the games are not equivalent, in that there are situations where the spoiler requires strictly more resources to win one game than the other game. This makes formal the informal statement that one game is easier for the duplicator to win.
Theory in Economics in Philosophy of Social Science
Review: Ronald Fagin, Moshe Y. Vardi, Knowledge and Implicit Knowledge in a Distributed Environment: Preliminary Report. William J. Rapaport, Ronald Fagin & Moshe Y. Vardi - 1988 - Journal of Symbolic Logic 53 (2):667.
Fifth Conference on Theoretical Aspects of Reasoning About Knowledge (TARK V). Ronald Fagin - 1993 - Journal of Logic, Language, and Information 2 (338).
A Two-Cardinal Characterization of Double Spectra. Ronald Fagin - 1975 - Mathematical Logic Quarterly 21 (1):121-122.
Advances in Therapy
December 2019, Volume 36, Issue 12, pp 3458–3470
Efficacy and Safety of 8 Weeks of Glecaprevir/Pibrentasvir in Treatment-Naïve, HCV-Infected Patients with APRI ≤ 1 in a Single-Arm, Open-Label, Multicenter Study
Robert J. Fontana
Sabela Lens
Stuart McPherson
Magdy Elkhashab
Victor Ankoma-Sey
Mark Bondin
Ana Gabriela Pires dos Santos
Zhenyi Xue
Roger Trinh
Ariel Porcalla
Stefan Zeuzem
First Online: 23 October 2019
Introduction
The presence or absence of cirrhosis in patients with chronic hepatitis C virus (HCV) infection influences the type and duration of antiviral therapy. Non-invasive markers, like serum aspartate aminotransferase (AST) to platelet ratio index (APRI), may help identify appropriate HCV treatment-naive patients for 8-week treatment with the pangenotypic regimen of glecaprevir/pibrentasvir.
Methods
This single-arm, open-label, international, prospective study (NCT03212521) evaluated the efficacy and safety of 8-week glecaprevir/pibrentasvir regimen in HCV treatment-naïve adults with chronic HCV genotypes 1–6 infection, APRI ≤ 1, and no prior evidence of cirrhosis. The primary and secondary outcomes were sustained virologic response at 12 weeks post-treatment (SVR12) by modified intent-to-treat (mITT) and intent-to-treat (ITT) analyses, respectively. Additional endpoints included virologic failures, treatment adherence, and genotype-specific SVR12 rates.
Results
Among the 230 patients enrolled, most were less than 65 years old (90%); 37% and 43% had a history of injection drug use or psychiatric disorders, respectively. SVR12 rates were 100% (222/222; 95% CI 98.3–100%) and 96.5% (222/230; 95% CI 94.2–98.9%) by mITT and ITT analyses, respectively. There were no virologic failures. ITT SVR12 rates were greater than 94% for all HCV genotypes. In patients with available data, treatment adherence was 99% (202/204). There were no grade 3 or higher laboratory abnormalities in alanine aminotransferase (ALT), aspartate aminotransferase (AST), and total bilirubin, and low rates of serious adverse events (2%).
Conclusions
Glecaprevir/pibrentasvir was highly efficacious and well tolerated in HCV treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis.
Trial Registration
ClinicalTrials.gov number, NCT03212521.
Funding
AbbVie.
Plain Language Summary
Plain language summary available for this article.
Keywords: Chronic hepatitis C · Direct acting antiviral · Glecaprevir/pibrentasvir · Infectious diseases · Simplification
Abbreviations
AE: Adverse event
ALT: Alanine aminotransferase
APRI: Aspartate aminotransferase to platelet ratio index
ART: Antiretroviral therapy
AST: Aspartate aminotransferase
DAA: Direct acting antiviral
G/P: Glecaprevir/pibrentasvir
ITT: Intent-to-treat
LLOQ: Lower limit of quantification
MedDRA: Medical Dictionary for Regulatory Activities
mITT: Modified intent-to-treat
PWUD: People who use drugs
SVR: Sustained virologic response
SVR12: Sustained virologic response at 12 weeks post-treatment
ULN: Upper limit of normal
Enhanced Digital Features
To view enhanced digital features for this article go to https://doi.org/10.6084/m9.figshare.9924842.
The online version of this article ( https://doi.org/10.1007/s12325-019-01123-0) contains supplementary material, which is available to authorized users.
Key Summary Points
Why carry out this study?
The presence or absence of cirrhosis in patients with chronic hepatitis C virus (HCV) infection influences the type and duration of antiviral therapy.
Non-invasive markers, like serum aspartate aminotransferase (AST) to platelet ratio index (APRI), may help identify appropriate HCV treatment-naive patients for 8-week treatment with the pangenotypic regimen of glecaprevir/pibrentasvir in countries where 8-week G/P treatment in patients with compensated cirrhosis may take longer to get approved.
What was learned from the study?
Glecaprevir/pibrentasvir was highly efficacious and well tolerated in HCV treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis.
Use of APRI may help HCV elimination efforts by simplifying care pathways and treatment scale-up in community-based settings with glecaprevir/pibrentasvir.
Elimination of chronic hepatitis C virus (HCV) infection is now achievable owing to powerful drug combinations, like glecaprevir/pibrentasvir (G/P), that get rid of all major types of the virus in patients; however, such elimination efforts led by the World Health Organization (WHO) depend on the availability of low-cost and scalable testing as well as access to the drugs. In order to determine if a simple and low-cost blood test called aspartate aminotransferase to platelet ratio index (APRI) can be used to select appropriate patients for shorter duration treatment, this study evaluated whether 8-week G/P is safe and eliminates the virus in patients with an APRI less than a predetermined threshold of 1, no prior HCV treatment experience, and no evidence of liver scarring or damage. We found that 8-week G/P in these selected patients was safe and eliminated the virus in over 96% of patients with no one failing to respond to treatment or losing their initial response; thus, this predetermined APRI threshold could be used in clinical practice as a simplified pretreatment assessment to select patients with chronic HCV, no prior HCV treatment experience, and no evidence of liver scarring or damage for the 8-week G/P regimen.
Chronic hepatitis C virus (HCV) infection afflicts 71 million people worldwide [1]. Recently, the population with chronic HCV has shifted from an older population with cirrhosis to an increasingly younger HCV treatment-naïve population without cirrhosis particularly as a result of its spread among people who use drugs (PWUDs) [1, 2]. When left untreated, patients with chronic HCV are at increased risk for liver cirrhosis, and liver-related and all-cause mortality [3, 4]. In the past decade, novel direct acting antivirals (DAAs) have revolutionized the treatment of chronic HCV infection by yielding rates of over 90% sustained virologic response (SVR) in both treatment-naïve and experienced patients [5]. Achievement of SVR decreases long-term health risks associated with chronic HCV infection and improves patient quality of life, thereby delivering a cost-effective treatment for patients with chronic HCV infection [6, 7, 8, 9]. However, despite the availability and benefit of DAAs, linkage to care following HCV diagnosis continues to be a gap in the HCV care cascade due, in part, to the need for specialists to identify the presence or absence of cirrhosis using either liver biopsy or transient elastography prior to initiating treatment [10, 11, 12].
Achievement of the World Health Organization (WHO) global target for HCV elimination by 2030 depends on access to low-cost and scalable testing as well as access to effective DAA treatment [1, 10]. Current treatment guidelines in the USA and Europe recommend both genotype (GT) and fibrosis or cirrhosis testing in order to determine the most suitable DAA regimen and treatment duration [11, 12]. Although liver biopsy and transient elastography have primarily been utilized for fibrosis or cirrhosis testing, non-invasive markers, including the aspartate aminotransferase to platelet ratio index (APRI), may be used to assess for cirrhosis prior to HCV treatment according to current WHO, European, Australian, and Canadian guidelines [11, 13, 14, 15]. APRI is determined from a blood test, thereby providing a low-cost, widely available, non-invasive method that has high negative predictive value (94%) for cirrhosis at an APRI cutoff of 1.0 compared to liver biopsy [16, 17].
Glecaprevir (a potent pangenotypic NS3/4A protease inhibitor identified by AbbVie and Enanta) plus pibrentasvir (a potent pangenotypic NS5A inhibitor), co-formulated as G/P, is an efficacious and safe DAA regimen approved for the treatment of patients with chronic HCV GT1–6 infection and compensated liver disease with or without cirrhosis, including those co-infected with HIV or with severe renal impairment [18, 19, 20]. G/P is approved for an 8-week treatment duration in patients with chronic HCV GT1–6 infection and without cirrhosis on the basis of data from its registrational trials, which demonstrated comparably high rates (at least 95%) of sustained virologic response at 12 weeks post-treatment (SVR12) with either 8 or 12 weeks of treatment [20, 21]. There was no significant difference observed in SVR12 rates between the 8- and 12-week durations in patients without cirrhosis regardless of baseline patient or disease characteristics analyzed, including patients who were HCV treatment-naïve and those with a pretreatment APRI < 1 or ≥ 1 [20]. Real-world evidence has been consistent with the G/P registrational trials, demonstrating that 8-week G/P is highly effective and well tolerated [22, 23]. However, a prospective study is necessary to determine if the use of non-invasive biomarker APRI can simplify the selection of patients for 8-week G/P treatment.
In the current study, we aim to determine whether a screening APRI ≤ 1 can be used to select appropriate patients for an 8-week G/P treatment duration by evaluating the efficacy and safety of 8-week G/P in HCV treatment-naïve patients with a screening APRI ≤ 1. Although 8-week treatment with G/P regimen has recently been approved for the treatment of treatment-naïve patients with GT1, 2, 4, 5, and 6 infections and with compensated cirrhosis in the USA and EU [24], we believe that APRI screening is useful in countries where 8-week G/P treatment in patients with compensated cirrhosis may take longer to get approved.
This multicenter, open-label, single-arm, prospective phase III trial assessed the efficacy and safety of 8-week G/P regimen in treatment-naïve patients with APRI ≤ 1 at screening. Patients received three co-formulated tablets containing 100 mg glecaprevir and 40 mg pibrentasvir once daily with food for 8 weeks (total dose of 300 mg of glecaprevir and 120 mg pibrentasvir). Patients were enrolled in ten countries (Bulgaria, Canada, France, Germany, Poland, Puerto Rico, Russia, Spain, UK, and USA) across 43 sites. All patients provided written informed consent prior to screening. All authors had access to the study data, and reviewed and approved the final manuscript.
Patients were eligible if they were male or female, at least 18 years old at screening, and were positive for anti-HCV with plasma HCV RNA ≥ 1000 IU/mL for at least 6 months prior to and at screening. Women were eligible if they were not pregnant or breastfeeding, or were sterile or practicing one method of birth control. Patients were selected for the study if they had an APRI score ≤ 1 at screening and were HCV treatment-naïve. Patients with HCV GT1, 2, 3, 4, 5, or 6 infection were eligible for the study, including patients with mixed or indeterminate GT. Patients with human immunodeficiency virus type 1 (HIV-1) were eligible for the study if they were HIV treatment-naïve or on a stable, qualifying antiretroviral therapy (ART) regimen. Patients with drug or alcohol misuse could enroll unless they were considered an unsuitable candidate for the study by the site investigator as a result of recent misuse (within 6 months prior to study drug administration).
Patients were eligible unless they had evidence of cirrhosis in previous or current medical assessments; however, liver biopsy or transient elastography was not required to be performed for study eligibility. Patients were excluded if they met the following pre-defined laboratory values: platelet count < 150,000 cells/mm3, alanine aminotransferase (ALT) > 10× upper limit of normal (ULN), aspartate aminotransferase (AST) > 10× ULN, direct bilirubin > ULN, and/or albumin < lower limit of normal. Patients were required to be HBsAg and anti-HBc negative, or HBV DNA < lower level of quantification (LLOQ) with an isolated positive anti-HBc. Patients who had chronic kidney disease stage 4 or 5 (calculated creatinine clearance < 30 mL/min), previous organ transplantation, or a history of hepatocellular carcinoma (HCC) were excluded. Patients were required to discontinue prohibited medications or supplements at least 14 days or ten half-lives, whichever was longer, prior to the first G/P dose. Complete patient eligibility criteria are provided in the Supplementary Appendix.
APRI was assessed at screening based on concurrent measures for AST and platelet count. The following formula was used to calculate APRI:
$$\mathrm{APRI} \;=\; \frac{\text{AST (U/L)} \,/\, \text{AST ULN (U/L)}}{\text{Platelet count}\ (10^{9}/\mathrm{L})} \times 100$$
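To make the calculation concrete, the following is a minimal sketch of the APRI computation and the screening cutoff of ≤ 1 (the index is conventionally scaled by 100, which matches the screening values reported in Table 1). The function name and input values are illustrative only, chosen near the cohort medians (e.g., a platelet count of 243 × 10^9/L); this is not code from the study.

```python
def apri(ast_u_per_l: float, ast_uln_u_per_l: float, platelets_10e9_per_l: float) -> float:
    """Aspartate aminotransferase to platelet ratio index (APRI)."""
    return (ast_u_per_l / ast_uln_u_per_l) / platelets_10e9_per_l * 100


# Hypothetical patient: AST at the upper limit of normal, platelets near the cohort median.
score = apri(ast_u_per_l=40, ast_uln_u_per_l=40, platelets_10e9_per_l=243)
print(f"APRI = {score:.2f}")  # ~0.41, close to the reported median screening APRI
print("meets APRI <= 1 screening criterion" if score <= 1 else "APRI > 1")
```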
Real-time reverse transcriptase-polymerase chain reaction (RT-PCR) was used to quantify plasma HCV RNA for both baseline viral load and SVR12 assessments. HCV genotype was determined using the Versant® HCV Genotype Inno LiPA Assay, Version 2.0 or higher (Siemens Healthcare Diagnostics, Tarrytown, NY), and confirmed by phylogenetic analysis of viral sequences. SVR12 was assessed as HCV RNA < LLOQ 12 weeks after last G/P dose for all patients receiving at least one G/P dose in both intent-to-treat (ITT) and a modified ITT (mITT) that excluded patients not achieving SVR12 for reasons other than virologic failure (e.g., premature G/P discontinuation or missing HCV RNA data 12 weeks after last G/P dose). Treatment adherence was assessed comparing pills taken with expected pill count.
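As an illustration of the ITT versus mITT accounting described above, the sketch below classifies hypothetical patient records and computes both denominators. The record fields and helper are invented for illustration; the example counts (222 responders, 5 with missing post-treatment data, 3 premature discontinuations) simply mirror the figures reported later in the Results.

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class Patient:
    svr12: Optional[bool]      # None = HCV RNA missing 12 weeks after the last dose
    virologic_failure: bool    # on-treatment failure or post-treatment relapse


def svr12_rate(patients, modified=False):
    if modified:
        # mITT: exclude non-responders whose reason is not virologic failure
        patients = [p for p in patients if p.svr12 or p.virologic_failure]
    responders = sum(1 for p in patients if p.svr12)
    return responders, len(patients)


cohort = ([Patient(True, False)] * 222      # achieved SVR12
          + [Patient(None, False)] * 5      # missing SVR12 data, no virologic failure
          + [Patient(False, False)] * 3)    # premature discontinuation, no virologic failure

print(svr12_rate(cohort))                   # (222, 230) -> ITT
print(svr12_rate(cohort, modified=True))    # (222, 222) -> mITT
```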
Safety was evaluated by physical examination, vital signs, electrocardiogram, clinical laboratory testing, and adverse events (AEs) monitoring throughout the duration of the study. All AEs were coded using the Medical Dictionary for Regulatory Activities (MedDRA) and were assessed for their relationship to G/P by study investigators.
Efficacy of 8-week G/P treatment was assessed using a fixed sequence testing procedure. The primary endpoint was the percentage of patients with SVR12 in the mITT population. If this primary endpoint was met, the secondary endpoint about the percentage of patients with SVR12 in the ITT population would be evaluated in the sequential testing. Non-sequential secondary efficacy endpoints were the percentage of patients with on-treatment virologic failure or post-treatment relapse in the ITT population. Additional analysis assessed treatment adherence, defined as use of at least 80% and at most 120% of tablets taken relative to the expected total number of tablets to be taken. Safety was evaluated by the number and percentage of patients with treatment-emergent AEs and laboratory abnormalities, and through characterization of reported AEs.
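A small sketch of the adherence rule (at least 80% and at most 120% of the expected tablet count) is given below. The helper name and example numbers are hypothetical; the expected count is derived from the dosing described earlier (three tablets once daily for 8 weeks).

```python
EXPECTED_TABLETS = 3 * 56   # three tablets per day over an 8-week (56-day) course


def is_adherent(tablets_taken: int, expected: int = EXPECTED_TABLETS) -> bool:
    """Adherent if 80-120% of the expected number of tablets was taken."""
    ratio = tablets_taken / expected
    return 0.80 <= ratio <= 1.20


print(is_adherent(168))   # True:  100% of expected
print(is_adherent(130))   # False: ~77% of expected
```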
The study enrolled 230 patients with chronic HCV GT1–6 infection, including 35 with GT3 infection, in order to achieve 90% and 83% power to demonstrate efficacy compared with pre-specified thresholds for the mITT and ITT populations, respectively. The pre-specified thresholds were set on the basis of historical SVR12 rates from the G/P registrational trials [21]. The number and percentage of patients in both mITT and ITT population achieving SVR12 were summarized with two-sided 95% confidence intervals (CI) calculated using the normal approximation to the binomial distribution. If the number of SVR12 non-responders was less than five, then Wilson's score method would be used to calculate the confidence interval. The primary efficacy endpoint in the mITT population was met if the lower bound of the 95% CI was greater than the pre-specified threshold of 92.4% based on the historical rate observed in the G/P registrational studies in treatment-naïve patients without cirrhosis (98.4%) minus 6%. The secondary efficacy endpoint of SVR12 in the ITT population was met if the lower bound of the 95% CI was greater than 91.4% based on the mITT threshold minus an expected 1% rate of non-virologic failure.
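For readers who want to reproduce the interval arithmetic, the sketch below implements the two rules named above (normal approximation to the binomial, and Wilson's score method when there are fewer than five non-responders). It is not the study's statistical code, but applied to the counts reported in the Results (mITT 222/222, ITT 222/230) it returns the same 95% confidence intervals.

```python
from math import sqrt

Z = 1.959964  # two-sided 95% critical value of the standard normal


def normal_approx_ci(successes: int, n: int) -> tuple:
    p = successes / n
    half = Z * sqrt(p * (1 - p) / n)
    return max(0.0, p - half), min(1.0, p + half)


def wilson_ci(successes: int, n: int) -> tuple:
    p = successes / n
    denom = 1 + Z ** 2 / n
    centre = p + Z ** 2 / (2 * n)
    half = Z * sqrt(p * (1 - p) / n + Z ** 2 / (4 * n ** 2))
    return (centre - half) / denom, (centre + half) / denom


# mITT: 0 non-responders (< 5), so Wilson's method is used   -> about 98.3-100%
print([round(100 * x, 1) for x in wilson_ci(222, 222)])
# ITT: 8 non-responders (>= 5), so the normal approximation is used -> about 94.2-98.9%
print([round(100 * x, 1) for x in normal_approx_ci(222, 230)])
```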
Compliance with Ethics Guidelines
The trial was conducted in accordance with Good Clinical Practice and the Declaration of Helsinki, and was approved at all sites by their independent ethics committee or institutional review board prior to enrollment. The master ethics committee of this study is the Quorum Institutional Review Board. A complete list of institutional ethics committees or institutional review boards is provided in Supplementary Table S3.
Between August 7, 2017 and August 13, 2018, 230 treatment-naïve patients with chronic HCV genotypes 1–6 infection and APRI ≤ 1 were enrolled (Fig. 1). Table 1 summarizes the complete baseline demographics and disease characteristics for patients in the mITT and ITT populations. Overall, in the ITT population, patients (n, %) were predominantly Caucasian (207, 90%) and less than 65 years old (207, 90%), and most patients had GT1 infection (151, 66%) and a screening APRI ≤ 0.5 (140, 61%). There were 87 (38%) patients with a history of injection drug use (most of which were from more than 12 months ago) and 99 (43%) patients with a history of a psychiatric disorder, including 45 (20%) with a history of depression or bipolar disorder. Although all 230 patients were HCV treatment-naïve, 31% (71/230) of patients presented with a key baseline NS5A polymorphism. At baseline, a blinded FibroTest assessment yielded a median value of 0.26 (range 0.02–0.87).
Fig. 1 Trial profile. *Patients who were screening failures are counted under each reason given for screening failure; therefore, the sum of the counts for the reasons for screening failure may be greater than the overall number of screening failures. †For the majority of patients who did not meet the eligibility criteria, the reason was not failure to meet the APRI score of ≤ 1 at the time of screening
Baseline demographics and patient characteristics
mITT population N = 222
ITT population N = 230
Male, n (%)
Race, n (%)
Hispanic or Latino ethnic origin, n (%)
Age, median (range), years
Age ≥ 65 years old, n (%)
BMI, median (range), kg/m2
25.3 (16.9–55.6)
Baseline HCV RNA level, median (range), log10 IU/mL
6.3 (2.2–7.7)
Baseline HCV RNA ≥ 1 million IU/mL, n (%)
HCV genotype, n (%)
GT1a
GT1b
GT1i
1 (< 1)
Key baseline polymorphisms, n (%)a
Any NS3 polymorphism
Any NS5A polymorphism
Screening APRI, median (range)
0.41 (0.13–1.0)
Screening APRI, n (%)
≤ 0.5
Blinded fibrotest, median (range)b
0.26 (0.02–0.87)
FIB-4, median (range)
Platelet count, median (range), count/109/L
243 (126–462)
HIV co-infection, n (%)c
CD4+ T cell countd, median (range), cells/mm3
762 (444–1199)
History of injection drug use, n (%)
Within the last 12 months
More than 12 months ago
On stable opiate substitution, n (%)
History of diabetes, n (%)
History of depression or bipolar disorder
BMI body mass index, HCV hepatitis C virus, GT genotype, APRI aspartate aminotransferase to platelet ratio, HIV human immunodeficiency virus
aIncludes any baseline resistance-associated variants in NS3 (155, 156, and 168) or NS5A (24, 28, 30, 31, 58, 92, and 93) at a 15% detection threshold. No patients had both an NS3 and an NS5A key resistance-associated variant
bPerformed at baseline, blinded to the investigators and therefore not used for patient eligibility. A value > 0.80 has a high positive predictive value (PPV) for cirrhosis [16]
cAll patients with HIV co-infection were antiretroviral therapy-naïve
dOnly assessed in the 10 HIV/HCV co-infected patients
Efficacy Outcomes
For the primary efficacy endpoint, overall SVR12 rate by mITT analysis was 100% (222/222; 95% CI 98.3–100%) with no patients experiencing virologic failure (Fig. 2). The primary efficacy endpoint for the study was met since the lower bound of the 95% CI (98.3%) was greater than the pre-specified threshold of 92.4%. All patients with baseline NS3 or NS5A polymorphisms achieved SVR12 (70/70; 100%).
Fig. 2 Efficacy of 8-week G/P regimen in HCV treatment-naïve patients with APRI ≤ 1. G/P efficacy, defined as SVR12, is reported overall using modified intent-to-treat (mITT; blue) and intent-to-treat (ITT; green) analyses. Bar graphs show mean with 95% confidence intervals and include reasons for non-response. Dotted lines indicate the thresholds that the lower bound of the 95% CI must exceed in the mITT and ITT analyses to meet the primary and secondary endpoints, respectively. *Includes two patients who prematurely discontinued because of a serious AE (see Table 2). †All five patients missing SVR12 data had no detectable HCV RNA at the end of treatment
For the sequential secondary efficacy endpoint, overall SVR12 rate in the ITT population was 96.5% (222/230, 95% CI 94.2–98.9%). The secondary efficacy endpoint was met since the lower bound of the 95% CI (94.2%) was greater than the threshold of 91.4%. Eight (3%) patients did not achieve SVR12 because of non-virologic failure reasons, specifically three (1%) due to premature G/P discontinuations and five (2%) due to missing HCV RNA data 12 weeks after last dose of G/P. Overall, four (2%) patients prematurely discontinued G/P; one achieved SVR12 despite discontinuing G/P treatment at day 12 after becoming pregnant during treatment. Among the patients failing to achieve SVR12, two patients discontinued because of adverse events at days 8 and 15, respectively, while one patient discontinued at day 29 because of non-compliance. All five patients missing SVR12 data had no detectable HCV RNA at their last treatment visit (four at post-treatment week 4 and one at end of treatment).
Additional endpoints included adherence to G/P treatment and SVR12 by HCV genotype. Among all patients with available data at all treatment visits, 99% (202/204) of patients were adherent to treatment. One patient who was non-adherent achieved SVR12, while the other patient did not achieve SVR12 after prematurely discontinuing G/P at day 29 because of non-adherence as mentioned above. SVR12 rates by HCV genotype are reported in Fig. 3 using both mITT and ITT analyses. High SVR12 rates were observed in all HCV genotypes.
Fig. 3 Efficacy by genotype for 8-week G/P regimen in HCV treatment-naïve patients with APRI ≤ 1. G/P efficacy, defined as SVR12, is reported by HCV genotype using modified intent-to-treat (mITT; blue) and intent-to-treat (ITT; green) analyses. Bar graphs show mean with 95% confidence intervals. *One GT1 patient with subtype GT1i achieved SVR12
Safety Outcomes
Overall, 124/230 (54%) patients experienced an AE, of which eight (3%) patients had a grade 3 or higher AE (Table 2). The most common AEs occurring in at least 5% of patients were headache (13%) and fatigue (7%). Among the four (2%) patients who experienced a serious AE (listed in Supplementary Table 1), two (1%) patients experienced serious AEs of angioedema that led to premature G/P discontinuation on days 8 and 15, respectively. After G/P was discontinued, both cases of angioedema resolved within 7 and 3 days, respectively. Both patients were Black or African American, former drug users, taking an angiotensin-converting enzyme (ACE) inhibitor (lisinopril), and had HIV co-infection (see Supplementary Table 2 for more information).
Adverse events and laboratory abnormalities
Event, n (%)
Eight-week G/P treatment, N = 230
Any AE
Grade ≥ 3 AE
Serious AE
DAA-relateda serious AE
2 (< 1)b
AE leading to premature G/P discontinuation
AEs occurring in ≥ 5% of all patients by preferred term
Laboratory abnormalities (grade ≥ 3)
ALT > 5 × ULNc
AST > 5 × ULN
Total bilirubin > 3 × ULN
G/P glecaprevir/pibrentasvir, AE adverse event, DAA direct acting antiviral, ALT alanine aminotransferase, ULN upper limit of normal, AST aspartate aminotransferase
aAs assessed by the investigator
bTwo patients experienced a serious AE of angioedema leading to premature G/P discontinuation on days 8 and 15, respectively
cPost-nadir increase in grade to grade ≥ 3
There were no laboratory abnormalities in ALT, AST, or total bilirubin. There were no events consistent with hepatic decompensation, hepatic failure, or drug-induced liver injury.
Using APRI ≤ 1 as a selection tool for an 8-week G/P regimen yielded high SVR12 rates and no virologic failures among treatment-naïve patients with chronic HCV GT1–6 infection and no prior evidence of cirrhosis. This finding suggests that simplification of pretreatment testing is feasible specifically amongst the growing population of patients with chronic HCV infection that are being evaluated and treated in the community-based setting, many of whom are younger, HCV treatment-naïve, and have less severe liver disease [1, 2]. In this emerging population that includes high proportions of PWUDs and patients with psychiatric disorders, treatment adherence is a concern that has persisted despite clinical trial data demonstrating high adherence [25, 26]. Similar to previous findings, 99% of patients in our study were adherent despite the inclusion of patients with histories of injection drug use and psychiatric disorders [25, 27, 28]. Overall, the current data are consistent with previous findings from the G/P registrational trials since efficacy remained high regardless of HCV genotype and presence of baseline NS3 or NS5A polymorphisms [20].
The 8-week G/P regimen was safe and well tolerated in these HCV treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis, consistent with previous data reported with G/P. The most common AEs were headache and fatigue, occurring in a comparably low proportion of the patient population [19, 20]. Rates of AEs leading to premature discontinuation (1%) and serious AEs (2%) were also similarly low in this study compared with prior integrated analysis of G/P safety [19, 20]. While there was only one non-serious case of angioedema among the 2369 patients within the G/P registrational trials, two serious cases of angioedema were reported in this study, both leading to premature G/P discontinuation. Both cases, however, were attributed to concomitant use of an ACE inhibitor (lisinopril). While there is no clinically significant pharmacokinetic interaction between G/P and lisinopril, there is a clear link between ACE inhibitors, like lisinopril, and angioedema, especially among African Americans [21, 29]. Consistent with the low rates of laboratory abnormalities observed in the registrational trials, no grade 3 or higher laboratory abnormalities in ALT, AST, or total bilirubin were observed [19]. Overall, the favorable safety profile of the 8-week G/P regimen in HCV treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis demonstrated in this study is consistent with safety data from the registrational trials and post-marketing real-world evidence, supporting the safety of the 8-week regimen in this patient population [20, 22, 23].
Since 8 weeks of this pangenotypic DAA regimen was both efficacious and safe in HCV treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis, this study suggests that pretreatment testing can be further simplified when using a pangenotypic therapy such as G/P. First, as recommended by the European Association for the Study of the Liver (EASL) and WHO guidelines, genotyping is not necessary when using G/P in all treatment-naïve patients owing to its high efficacy across all genotypes [11, 13]. Second, on the basis of our findings, the well-studied, widely available, low-cost blood test for APRI can be used to determine G/P treatment duration, thereby removing the need for a specialist to perform more invasive and costly screening tests for cirrhosis prior to treatment initiation [16]. Using this simplified screening approach, treatment-naive patients can rapidly initiate treatment with G/P in community-based settings by triaging to primary care providers, while more invasive cirrhosis testing and follow-up in more well-resourced or specialized settings can be used as a second-line test in HCV treatment-naïve patients with APRI > 1. This approach could reduce the need for liver biopsy or transient elastography especially given the growing population of younger treatment-naïve patients with chronic HCV infection that are less likely to be cirrhotic. Patients with prior HCV treatment experience will still require both genotyping and more comprehensive cirrhosis testing to determine G/P treatment duration. Thus, 8-week G/P regimen for these treatment-naïve patients with APRI ≤ 1 and no prior evidence of cirrhosis provides a simplified and shortened treatment program which may improve health benefits and save costs for healthcare systems [30]. Further simplification may be possible on the basis of preliminary results from EXPEDITION-8 that show high SVR12 rates with 8-week G/P treatment in patients with chronic HCV GT1, 2, 4, 5, or 6 infection and compensated cirrhosis; however, 8-week G/P treatment is currently not a recommended regimen for patients with prior evidence of cirrhosis [31].
There are limitations to this study inherent to its design. This was a single-arm, open-label trial without a placebo or active control; however, the use of objective measures for efficacy (SVR12) and the comparison with a pre-specified threshold considering historical reference SVR12 rates from G/P registrational trials mitigates this concern. Given that these data are from a controlled clinical trial, further real-world data using this approach for simplification of pretreatment testing is necessary to validate its use in clinical practice.
These data support the use of APRI ≤ 1 in clinical practice as a simplified pretreatment assessment to select treatment-naïve patients with chronic HCV GT1–6 infection and no prior evidence of cirrhosis for the 8-week G/P regimen. Use of this approach could aid in HCV elimination efforts by simplifying care pathways and treatment scale-up in community-based settings by non-specialist providers for HCV treatment-naïve patients without prior evidence of cirrhosis.
AbbVie and authors thank all the trial investigators and the patients who participated in this clinical trial.
This work and the Rapid Service and Open Access Fees were supported by AbbVie. AbbVie sponsored the study (NCT03212521), contributed to its design, data collection, analysis, and interpretation of the data, and participated in the writing, review, and approval of the manuscript.
Medical Writing and/or Editorial Assistance
Medical writing support was provided by Daniel O'Brien, Ph.D., and Salil Sharma, Ph.D., both of AbbVie.
All named authors meet the International Committee of Medical Journal Editors (ICMJE) criteria for authorship for this article, take responsibility for the integrity of the work as a whole, and have given their approval for this version to be published.
Ana Gabriela Pires dos Santos designed the study. Zhenyi Xue led study analyses. Daniel O'Brien did the literature search and provided medical writing support. All authors contributed to the implementation, conduct, data interpretation, writing, and review of this work. All authors approved the final version of the manuscript and the decision to submit to the journal.
Prior Presentation
This work was presented at AASLD (2018) The Liver Meeting.
Robert Fontana: Research support from BMS, Gilead, AbbVie. Consulting: Alynam. Stuart McPherson: Consultancy/speakers fees-Abbvie, Allergan, BMS, Gilead, MSD, Novartis. Sabela Lens: Advisor: Janssen, Gilead, AbbVie. Magdy Elkhashab: Grant support from AbbVie, Advisor for AbbVie. Victor Ankoma-Sey: investigator in an AbbVie-sponsored clinical trial. Mark Bondin is an employee of AbbVie, Inc. and may hold stock or stock options. Ana Gabriela Pires dos Santos is an employee of AbbVie, Inc. and may hold stock or stock options. Roger Trinh is an employee of AbbVie, Inc. and may hold stock or stock options. Zhenyi Xue is an employee of AbbVie, Inc. and may hold stock or stock options. Ariel Porcalla is an employee of AbbVie, Inc. and may hold stock or stock options. Stefan Zeuzem: Consultancies for AbbVie, BMS, Gilead, Janssen, Merck.
AbbVie is committed to responsible data sharing regarding the clinical trials we sponsor. This includes access to anonymized, individual and trial-level data (analysis data sets), as well as other information (e.g., protocols and Clinical Study Reports), as long as the trials are not part of an ongoing or planned regulatory submission. This includes requests for clinical trial data for unlicensed products and indications. This clinical trial data can be requested by any qualified researchers who engage in rigorous, independent scientific research, and will be provided following review and approval of a research proposal and Statistical Analysis Plan (SAP) and execution of a Data Sharing Agreement (DSA). Data requests can be submitted at any time and the data will be accessible for 12 months, with possible extensions considered. For more information on the process, or to submit a request, visit the following link: https://www.abbvie.com/our-science/clinical-trials/clinical-trials-data-and-information-sharing/data-and-information-sharing-with-qualified-researchers.html.
Supplementary material 1 (DOCX 31 kb)
World Health Organization. Global hepatitis report. Geneva: World Health Organization; 2017.
Huppe D, Serfert Y, Buggisch P, et al. Hepatitis C therapy in Germany: results from the german hepatitis C registry 4 years after approval of the new direct antiviral substances (DAAs). Wiesbaden: Viszeralmedizin; 2018.
Alazawi W, Cunningham M, Dearden J, Foster GR. Systematic review: outcome of compensated cirrhosis due to chronic hepatitis C infection. Aliment Pharmacol Ther. 2010;32(3):344–55.
Xu F, Moorman AC, Tong X, et al. All-cause mortality and progression risks to hepatic decompensation and hepatocellular carcinoma in patients infected with hepatitis C virus. Clin Infect Dis. 2016;62(3):289–97.
Falade-Nwulia O, Suarez-Cuervo C, Nelson DR, Fried MW, Segal JB, Sulkowski MS. Oral direct-acting agent therapy for hepatitis C virus infection: a systematic review. Ann Intern Med. 2017;166(9):637–48.
Messori A, Badiani B, Trippoli S. Achieving sustained virological response in hepatitis c reduces the long-term risk of hepatocellular carcinoma: an updated meta-analysis employing relative and absolute outcome measures. Clin Drug Investig. 2015;35(12):843–50.
Liu Z, Wei X, Chen T, Huang C, Liu H, Wang Y. Characterization of fibrosis changes in chronic hepatitis C patients after virological cure: a systematic review with meta-analysis. J Gastroenterol Hepatol. 2017;32(3):548–57.
Younossi Z, Henry L. Systematic review: patient-reported outcomes in chronic hepatitis C—the impact of liver disease and new treatment regimens. Aliment Pharmacol Ther. 2015;41(6):497–520.
Cipriano LE, Goldhaber-Fiebert JD. Population health and cost-effectiveness implications of a "treat all" recommendation for HCV: a review of the model-based evidence. MDM Policy Pract. 2018;3(1):2381468318776634.
Calvaruso V, Petta S, Craxi A. Is global elimination of HCV realistic? Liver Int. 2018;38(Suppl 1):40–6.
EASL. European Association for the Study of the Liver recommendations on treatment of hepatitis C 2018. J Hepatol. 2018;69:461–511.
AASLD. HCV guidance: recommendations for testing, managing, and treating hepatitis C. 2018. https://www.hcvguidelines.org/. Accessed 27 Nov 2018.
World Health Organization. Guidelines for the screening care and treatment of persons with chronic hepatitis C infection. Geneva: World Health Organization; 2018.
Australian recommendations for the management of hepatitis C virus infection: a consensus statement (September 2018). 2018. https://www.hepatologyassociation.com.au/. Accessed 22 Oct 2018.
Shah H, Bilodeau M, Burak KW, et al. The management of chronic hepatitis C: 2018 guideline update from the Canadian Association for the Study of the Liver. CMAJ. 2018;190(22):E677–87.
Chou R, Wasson N. Blood tests to diagnose fibrosis or cirrhosis in patients with chronic hepatitis C virus infection. Ann Intern Med. 2013;158(11):807–20.
Asselah T, Lens S, Zadeikis N, et al. Analysis of AST to platelet ratio index (APRI) for determining eligibility for 8 weeks of glecaprevir/pibrentasvir. J Viral Hepat. 2018;25(S2):19–20.
Rockstroh JK, Lacombe K, Trinh R, et al. Efficacy and safety of glecaprevir/pibrentasvir in patients co-infected with hepatitis C virus and human immunodeficiency virus-1: the EXPEDITION-2 study. Clin Infect Dis. 2018;67(7):1010–7.
Gane E, Poordad F, Zadeikis N, et al. Safety and pharmacokinetics of glecaprevir/pibrentasvir in adults with chronic genotype 1–6 HCV infection and compensated liver disease. Clin Infect Dis. 2019. https://doi.org/10.1093/cid/ciz022.
Puoti M, Foster GR, Wang S, et al. High SVR12 with 8-week and 12-week glecaprevir/pibrentasvir therapy: an integrated analysis of HCV genotype 1–6 patients without cirrhosis. J Hepatol. 2018;69(2):293–300.
MAVYRET (glecaprevir and pibrentasvir) [package insert]. North Chicago: Approved on August 2017. https://www.rxabbvie.com/pdf/mavyret_pi.pdf.
D'Ambrosio R, Pasulo L, Puoti M, et al. Real-life effectiveness and safety of glecaprevir/pibrentasvir in 723 patients with chronic hepatitis C. J Hepatol. 201970(3):379–87.Google Scholar
Wiegand J, Naumann U, Stoehr A, et al. Glecaprevir/pibrentasvir for the treatment of patients with chronic hepatitis c virus infection: updated real-world data from the german hepatitis C-registry. Hepatology. 2018;68(S1):364A.Google Scholar
MAVIRET (SmPC); AbbVie 2019/MAVYRET (US package insert); AbbVie 2019.Google Scholar
Grebely J, Hajarizadeh B, Dore GJ. Direct-acting antiviral agents for HCV infection affecting people who inject drugs. Nat Rev Gastroenterol Hepatol. 2017;14(11):641–51.CrossRefGoogle Scholar
Back D, Belperio P, Bondin M, et al. Integrated efficacy and safety of glecaprevir/pibrentasvir in patients with psychiatric disorders. J Hepatol. 2018;68(S1):S280–1.CrossRefGoogle Scholar
Brown AS, Welzel TM, Conway B, et al. Adherence to pangenotypic glecaprevir/pibrentasvir treatment and SVR12 in HCV-infected patients: an integrated analysis of the phase 2/3 clinical trial program. J Hepatol. 2017;66(S1):114A–5A.Google Scholar
Foster GR, Grebely J, Sherman KE, et al. Safety and efficacy of glecaprevir/pibrentasvir in patients with chronic hepatitis C genotypes 1–6 and recent drug use. Hepatology. 2017;66(S1):636A–637A.Google Scholar
Kamil RJ, Jerschow E, Loftus PA, et al. Case-control study evaluating competing risk factors for angioedema in a high-risk population. Laryngoscope. 2016;126(8):1823–30.CrossRefGoogle Scholar
Feld JJ, Sanchez Gonzalez Y, Pires dos Santos AG, Ethgen O. Clinical benefits, economic savings and faster time to HCV elimination with a simplified 8-week treatment and monitoring program in chronic F0-F3 naive patients in the US. Hepatology. 2018;68(S1):408A–409A.Google Scholar
Brown RS Jr, Hezode C, Wang S, et al. Preliminary efficacy and safety of 8-week glecaprevir/pibrentasvir in patients with HCV genotypes 1–6 infection and compensated cirrhosis: the EXPEDITION-8 study. Hepatology. 2018;68(S1):425A–6A.Google Scholar
|
CommonCrawl
|
Effect of different concentrations and ratios of ammonium, nitrate, and phosphate on growth of the blue-green alga (cyanobacterium) Microcystis aeruginosa isolated from the Nakdong River, Korea
Kim, Hocheol; Jo, Bok Yeon; Kim, Han Soon 275
https://doi.org/10.4490/algae.2017.32.10.23
Microcystis aeruginosa causes harmful algal blooms in the Nakdong River of Korea. We studied the effect of different concentrations and ratios of ammonium ($\mathrm{NH_4^+}$), nitrate ($\mathrm{NO_3^-}$), and phosphate ($\mathrm{PO_4^{3-}}$) on growth of this species in BG-11 medium: each nutrient alone, the $\mathrm{NO_3^-}:\mathrm{NH_4^+}$ ratio, the N : P ratio with fixed total N (TN), and the N : P ratio with fixed total P (TP). The single nutrient experiments indicated that M. aeruginosa had the highest growth rate at $\mathrm{NH_4^+}$ and $\mathrm{NO_3^-}$ concentrations of $500\,\mu\mathrm{M}$, and at a $\mathrm{PO_4^{3-}}$ concentration of $5\,\mu\mathrm{M}$. The $\mathrm{NO_3^-}:\mathrm{NH_4^+}$ ratio experiments showed that M. aeruginosa had the highest growth rate at a ratio of 1 : 1 when TN was $100\,\mu\mathrm{M}$ and $250\,\mu\mathrm{M}$, and the lowest growth rate at a ratio of 1 : 1 when the TN was $500\,\mu\mathrm{M}$. The N : P ratio with fixed TN experiments indicated that M. aeruginosa had the highest growth rates at 50 : 1, 20 : 1, and 100 : 1 ratios when the TN was 100, 250, and $500\,\mu\mathrm{M}$, respectively. In contrast, the N : P ratio with fixed TP experiments showed that M. aeruginosa had the highest growth rates at a 200 : 1 ratio at all tested TP concentrations. In conclusion, our results imply that the $\mathrm{NO_3^-}:\mathrm{NH_4^+}$ ratio and the $\mathrm{PO_4^{3-}}$ concentration affect the early stage of growth of M. aeruginosa. In particular, our results suggest that the maximum growth of M. aeruginosa is not simply affected by the $\mathrm{NO_3^-}:\mathrm{NH_4^+}$ ratio and the N : P ratio, but is determined by the TN concentration if a certain minimum $\mathrm{PO_4^{3-}}$ concentration is present.
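One way to unpack the fixed-TN design (my own arithmetic, not stated in the abstract): at a fixed TN, the N : P ratio fixes the phosphate level via $[\mathrm{PO_4^{3-}}]=\mathrm{TN}/(\mathrm{N:P})$. The best-performing 100 : 1 ratio at TN of $500\,\mu\mathrm{M}$ therefore corresponds to $5\,\mu\mathrm{M}$ phosphate, the same value as the single-nutrient optimum reported above, while the 50 : 1 ratio at $100\,\mu\mathrm{M}$ and the 20 : 1 ratio at $250\,\mu\mathrm{M}$ correspond to $2\,\mu\mathrm{M}$ and $12.5\,\mu\mathrm{M}$ phosphate, respectively.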
Ichthyotoxic Cochlodinium polykrikoides red tides offshore in the South Sea, Korea in 2014: III. Metazooplankton and their grazing impacts on red-tide organisms and heterotrophic protists
Lee, Moo Joon; Jeong, Hae Jin; Kim, Jae Seong; Jang, Keon Kang; Kang, Nam Seon; Jang, Se Hyeon; Lee, Hak Bin; Lee, Sang Beom; Kim, Hyung Seop; Choi, Choong Hyeon 285
Cochlodinium polykrikoides red tides have caused great economic losses in the aquaculture industry in many countries. To investigate the roles of metazooplankton in red tide dynamics of C. polykrikoides in the South Sea of Korea, the abundance of metazooplankton was measured at 60 stations over 1- or 2-week intervals from May to November 2014. In addition, the grazing impacts of dominant metazooplankton on red tide species and their potential heterotrophic protistan grazers were estimated by combining field data on the abundance of red tide species, heterotrophic protist grazers, and dominant metazooplankton with data obtained from the literature concerning ingestion rates of the grazers on red tide species and heterotrophic protists. The mean abundance of total metazooplankton at each sampling time during the study was 297-1,119 individuals $\mathrm{m}^{-3}$. The abundance of total metazooplankton was significantly positively correlated with that of phototrophic dinoflagellates (p < 0.01), but it was not significantly correlated with water temperature, salinity, and the abundance of diatoms, euglenophytes, cryptophytes, heterotrophic dinoflagellates, tintinnid ciliates, and naked ciliates (p > 0.1). Thus, dinoflagellate red tides may support high abundance of total metazooplankton. Copepods dominated metazooplankton assemblages at all sampling times except from Jul 11 to Aug 6 when cladocerans and hydrozoans dominated. The calculated maximum grazing coefficients attributable to calanoid copepods on C. polykrikoides and Prorocentrum spp. were 0.018 and $0.029\,\mathrm{d}^{-1}$, respectively. Therefore, calanoid copepods may not control populations of C. polykrikoides or Prorocentrum spp. Furthermore, the maximum grazing coefficients attributable to calanoid copepods on the heterotrophic dinoflagellates Polykrikos spp. and Gyrodinium spp., which were grazers on C. polykrikoides and Prorocentrum spp., respectively, were 0.008 and $0.047\,\mathrm{d}^{-1}$, respectively. Therefore, calanoid copepods may not reduce grazing impact by these heterotrophic dinoflagellate grazers on populations of the red tide dinoflagellates.
Interactions between the voracious heterotrophic nanoflagellate Katablepharis japonica and common heterotrophic protists
Kim, So Jin; Jeong, Hae Jin; Jang, Se Hyeon; Lee, Sung Yeon; Park, Tae Gyu 309
Recently, the heterotrophic nanoflagellate Katablepharis japonica has been reported to feed on diverse red-tide species and contribute to the decline of red tides. However, if there are effective predators feeding on K. japonica, its effect on red tide dynamics may be reduced. To investigate potential effective protist predators of K. japonica, feeding by the engulfment-feeding heterotrophic dinoflagellates (HTDs) Oxyrrhis marina, Gyrodinium dominans, Gyrodinium moestrupii, Polykrikos kofoidii, and Noctiluca scintillans, the peduncle-feeding HTDs Luciella masanensis and Pfiesteria piscicida, the pallium-feeding HTD Oblea rotunda, and the naked ciliates Strombidium sp. (approximately $20\,\mu\mathrm{m}$ in cell length), Pelagostrobilidium sp., and Miamiensis sp. on K. japonica was explored. We found that none of these heterotrophic protists fed on actively swimming cells of K. japonica. However, O. marina, G. dominans, L. masanensis, and P. piscicida were able to feed on heat-killed K. japonica. Thus, actively swimming behavior of K. japonica may affect feeding by these heterotrophic protists on K. japonica. To the contrary, K. japonica was able to feed on O. marina, P. kofoidii, O. rotunda, Miamiensis sp., Pelagostrobilidium sp., and Strombidium sp. However, the specific growth rates of O. marina did not differ significantly among nine different K. japonica concentrations. Thus, K. japonica may not affect growth of O. marina. Our findings suggest that the effect of predation by heterotrophic protists on K. japonica might be negligible, and thus, the effect of grazing by K. japonica on populations of red-tide species may not be reduced by mortality due to predation by protists.
Effects of disturbance timing on community recovery in an intertidal habitat of a Korean rocky shore
Kim, Hyun Hee; Ko, Young Wook; Yang, Kwon Mo; Sung, Gunhee; Kim, Jeong Ha 325
https://doi.org/10.4490/algae.2017.32.12.7
Intertidal community recovery and resilience were investigated with quantitative and qualitative perspectives as a function of disturbance timing. The study was conducted in a lower intertidal rock bed of the southern coast of South Korea. Six replicates of artificial disturbance of a $50\,\mathrm{cm}\times50\,\mathrm{cm}$ area were made by clearing all visible organisms on the rocky substrate in four seasons. Each of the seasonally cleared plots was monitored until the percent cover data reached the control plot level. There was a significant difference among disturbance timing during the recovery process in terms of speed and community components. After disturbances occurred, Ulva pertusa selectively preoccupied empty spaces quickly (in 2-4 months) and strongly (50-90%) in all plots except for the summer plots where non-Ulva species dominated throughout the recovery period. U. pertusa acted as a very important biological variable that determined the quantitative and qualitative recovery capability of a community. The qualitative recovery of communities was rapid in summer plots where U. pertusa did not recruit and the community recovery rate was the lowest in winter plots where U. pertusa was highly recruited with a long duration of distribution. In this study, U. pertusa was a pioneer species while being a dominant species and acted as a clearly negative element in the process of qualitative recovery after disturbance. However, the negative effect of U. pertusa did not occur in summer plots, indicating that disturbance timing should be considered as a parameter in understanding intertidal community resilience in temperate regions with four distinct seasons.
Growth, reproduction and recruitment of Silvetia siliquosa (Fucales, Phaeophyceae) transplants using polyethylene rope and natural rock methods
Gao, Xu; Choi, Han Gil; Park, Seo Kyoung; Lee, Jung Rok; Kim, Jeong Ha; Hu, Zi-Min; Nam, Ki Wan 337
Silvetia siliquosa is an ecologically and commercially important brown alga that is harvested from its natural habitats, but its population has recently been diminishing along the Korean coast. To develop new techniques for algal population restoration, we tested two newly developed transplantation methods (using polyethylene ropes and natural rock pieces) at two study sites, Gwanmaedo and Yeongsando, on the southwest coast of Korea, from May to November 2014. The transplants on polyethylene ropes showed significantly greater survival, maturity, and growth than those on natural rocks at both study sites. Newly recruited juveniles (<3 cm) of S. siliquosa increased remarkably from May to December near the transplants on polyethylene ropes and natural rocks. Therefore, we suggest that transplantation using polyethylene ropes is more effective than using natural rocks to restore the population of S. siliquosa in Korea.
Variations in carbon and nitrogen stable isotopes and in heavy metal contents of mariculture kelp Undaria pinnatifida in Gijang, southeastern Korea
Shim, JeongHee; Kim, Jeong Bae; Hwang, Dong-Woon; Choi, Hee-Gu; Lee, Yoon 349
Korean mariculture Undaria pinnatifida was collected during the months of January, February, March, and December of 2010, as well as from January of 2011, to investigate the changes in the carbon and nitrogen stable isotope ratios ($\delta^{13}\mathrm{C}$ and $\delta^{15}\mathrm{N}$) and heavy metal contents with respect to its growth and to identify the factors that influence such changes. The blades of U. pinnatifida showed $\delta^{13}\mathrm{C}$ and $\delta^{15}\mathrm{N}$ in the range (mean) of -13.11 to -19.42‰ (-16.93‰) and 2.99 to 7.57‰ (4.71‰), respectively. Among samples with the same grow-out period, those that weighed more tended to have higher $\delta^{13}\mathrm{C}$, suggesting a close association between the carbon isotope ratio and the growth rate of U. pinnatifida. Indeed, we found a very high positive linear correlation between the monthly average $\delta^{13}\mathrm{C}$ and the absolute growth rate in weight ($r^2=0.89$). The nitrogen isotope ratio tended to be relatively lower when nitrogen content in the blade was higher, probably due to the strengthening of isotope fractionation stemming from plenty of nitrogen in the surrounding environment. In fact, a negative linear correlation was observed with the nitrate concentration in the nearby seawaters ($r^2=0.83$). Concentrations of Cu, Cd, Pb, Cr, Hg, and Fe in the blades showed a rapid decrease in their concentration per unit weight in the more mature U. pinnatifida. Specifically, compared to adult samples, Cu, Hg, and Pb were concentrated by 30-, 55-, and 73-fold, respectively, in the young blades. Therefore, U. pinnatifida tissue $\delta^{13}\mathrm{C}$ serves as an indirect indicator of its growth rate, while $\delta^{15}\mathrm{N}$ values and heavy metal concentrations serve as tracers that reflect the environmental characteristics.
Molecular cloning and expression analysis of the first two key genes through 2-C-methyl-D-erythritol 4-phosphate (MEP) pathway from Pyropia haitanensis (Bangiales, Rhodophyta)
Du, Yu; Guan, Jian; Xu, Ruijun; Liu, Xin; Shen, Weijie; Ma, Yafeng; He, Yuan; Shen, Songdong 359
Pyropia haitanensis (T. J. Chang et B. F. Zheng) N. Kikuchi et M. Miyata is one of the most commercially useful macroalgae cultivated in southeastern China. In red algae, the biosynthesis of terpenoids through the 2-C-methyl-D-erythritol 4-phosphate (MEP) pathway directly influences the synthesis of many biologically important metabolites. In this study, two cDNAs, 1-deoxy-D-xylulose-5-phosphate synthase (DXS) and 1-deoxy-D-xylulose-5-phosphate reductase (DXR), which encode the first two rate-limiting enzymes of the MEP pathway, were cloned from P. haitanensis. The cDNAs of P. haitanensis DXS (PhDXS) and DXR (PhDXR) both contained complete open reading frames, encoding polypeptides of 764 and 426 amino acid residues, respectively. Expression analysis showed that PhDXS was significantly differentially expressed between leafy thallus and conchocelis, whereas PhDXR was not. Additionally, expression of PhDXR and its downstream gene geranylgeranyl diphosphate synthase was significantly inhibited by fosmidomycin. Meanwhile, we constructed phylogenetic trees from the DXS- and DXR-encoding amino acid sequences of different algae and higher plants, and found that the resulting tree topologies were basically in line with the "Cavalier-Smith endosymbiotic theory." We therefore speculate that red algae possess only the complete MEP pathway to meet their own needs for terpenoid synthesis, and that terpenoid synthesis via the mevalonate pathway in red algal derivatives arose from two or more endosymbioses of heterotrophic eukaryotic hosts. This study demonstrates that PhDXS and PhDXR could play significant roles in terpenoid biosynthesis at the molecular level. Meanwhile, as nuclear genes of the MEP pathway, PhDXS and PhDXR could provide a new way of thinking about the problem of chromalveolate evolution.
Protective effect of gallic acid derivatives from the freshwater green alga Spirogyra sp. against ultraviolet B-induced apoptosis through reactive oxygen species clearance in human keratinocytes and zebrafish
Wang, Lei; Ryu, BoMi; Kim, Won-Suk; Kim, Gwang Hoon; Jeon, You-Jin 379
In the present study, we enhanced the phenolic content of 70% ethanol extracts of Spirogyra sp. (SPE, $260.47\pm5.21$ gallic acid equivalent (GAE) $\mathrm{mg\,g^{-1}}$) 2.97-fold, to $774.24\pm2.61$ GAE $\mathrm{mg\,g^{-1}}$, in the ethyl acetate fraction of SPE (SPEE). SPEE was evaluated for its antiradical activity in online high-performance liquid chromatography-ABTS analysis, and the peaks with the highest antiradical activities were identified as gallic acid derivatives containing gallic acid, methyl gallate, and ethyl gallate. Isolation of ethyl gallate from Spirogyra sp. was performed for the first time in this study. In ultraviolet B (UVB)-irradiated keratinocytes (HaCaT cells), SPEE improved cell viability by 8.22% and 23.33%, and reduced accumulation of cells in the sub-$G_1$ phase by 20.53% and 32.11%, at concentrations of 50 and $100\,\mu\mathrm{g\,mL^{-1}}$, respectively. Furthermore, SPEE (50 and $100\,\mu\mathrm{g\,mL^{-1}}$) reduced reactive oxygen species generation in UVB-irradiated zebrafish by 66.67% and 77.78%. This study demonstrates a protective activity of gallic acid and its derivatives from Spirogyra sp. against UVB-induced stress responses in both in vitro and in vivo models, suggesting a potential use of SPEE in photoprotection.
|
CommonCrawl
|
Orbit-Stabilizer and Covering Maps
Last week, in our quantum mechanics class, we were going over symplectic spaces and symplectic transformations. A symplectic space is just a manifold together with a skew-symmetric non-degenerate bilinear form $J$ defined on it, and a symplectic transformation $S$ is a transformation of the manifold into itself that preserves $J$. One of the most common examples is when we take our manifold to be $M=\mathbb{F}^{2n}$ and the symplectic form
$J=\begin{pmatrix}0&I_n\\-I_n&0\end{pmatrix}$
where $\mathbb{F}$ is a field and $I_n$ is the identity matrix. This is a symplectic manifold, and the set of symplectic transformations is known as $Sp(n,\mathbb{F})$. This is a well-known Lie group acting by multiplication on $M$, and one of its nicest properties is that this action is transitive, that is, for any non-zero $x,y\in M$ there is $S\in Sp(n,\mathbb{F})$ such that $y=Sx$.
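As a quick sanity check of the transitivity claim in the smallest case $n=1$, here is a sketch of my own (assuming Python with numpy; the explicit construction through symplectic bases is one standard route, not necessarily the intended homework proof):

import numpy as np

J = np.array([[0.0, 1.0], [-1.0, 0.0]])              # the standard symplectic form for n = 1

def omega(u, v):
    # omega(u, v) = u^T J v
    return u @ J @ v

def symplectic_basis(x):
    # Extend a nonzero x to a basis (x, x_tilde) with omega(x, x_tilde) = 1.
    w = J @ x                                          # omega(x, Jx) = -|x|^2 is nonzero, so this always works
    return np.column_stack([x, w / omega(x, w)])

def transport(x, y):
    # Return a symplectic S with S x = y, by mapping one symplectic basis onto another.
    return symplectic_basis(y) @ np.linalg.inv(symplectic_basis(x))

x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
S = transport(x, y)
assert np.allclose(S.T @ J @ S, J)                     # S preserves J, i.e. S is symplectic
assert np.allclose(S @ x, y)                           # and S carries x onto y

The same idea works for any $n$, since any nonzero vector can be completed to a symplectic basis and the matrix taking one symplectic basis to another is symplectic.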
Proving that the action is transitive was actually part of our homework, and I wanted to find a nicer way to prove it than the proof I had seen before in my previous courses, so I started thinking about different ways of saying that this action is transitive.
One way of seeing this is to turn the problem around: what happens if we let $S$ run over $Sp(n,\mathbb{F})$ and look at $Sx$ for a fixed $x$? Well, that is saying something like the orbit of $x$ is $M\setminus\{0\}$, and that started to sound a bit familiar.
I then tried to use some kind of orbit-stabilizer theorem together with a cardinality argument to kill the problem. However, I only remembered the finite version of this powerful theorem, which obviously would not help me at all, although in essence it was what I was looking for. A cardinality argument would not help in this situation, because a proper subspace can have the same cardinality as $M$, and this would not lead me to the conclusion I was after. Instead, a dimensionality argument was needed.
While searching for this and thinking about what was actually going on behind the scenes in this group action, I saw how helpful the notion of a representation is for understanding an unfamiliar object.
If $G$ is a Lie group, a representation of $G$ is a vector space $V$ on which $G$ acts linearly. We can think of $G$ as some sort of subgroup of $GL(V)$, the group of invertible linear transformations of $V$ into itself. For an element $x$ of $V$, we can talk about the $G$-orbit through $x$, written $O_x$, as the set of all $g.x$ for $g\in G$. In some sense, $O_x$ is a copy of the shape of $G$. Also, from the geometric point of view, a Lie group is a manifold endowed with superpowers (a group structure), and hence we can think of these orbits in $V$ as coordinate maps of $G$ given by $\phi(g)=g.x$; so, really, $O_x$ is how $G$ looks locally.
For example, take $O(2)$, the group of all $2\times 2$ matrices $O$ such that $OO^T=I$. This group is quite odd to picture, since it is a 1-dimensional manifold living in a 4-dimensional space, but by means of orbits one can get a pretty good idea of what this group looks like. By picking a nonzero vector $x$ and looking at its orbit in $\mathbb{R}^2$, one finds that $O(2)$ looks locally like a circle.
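A small numerical illustration of that picture (my own sketch, assuming Python with numpy): sample elements from both components of $O(2)$, apply them to a fixed $x$, and check that every image lands on the circle of radius $|x|$.

import numpy as np

def rotation(t):
    return np.array([[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]])

reflect = np.diag([1.0, -1.0])                         # a fixed reflection, determinant -1
x = np.array([2.0, 0.0])

orbit = []
for t in np.linspace(0.0, 2.0 * np.pi, 100, endpoint=False):
    for O in (rotation(t), rotation(t) @ reflect):     # one element from each connected component
        assert np.allclose(O @ O.T, np.eye(2))         # O is orthogonal
        orbit.append(O @ x)

radii = np.linalg.norm(np.array(orbit), axis=1)
assert np.allclose(radii, np.linalg.norm(x))           # the orbit of x is the circle of radius |x|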
In this situation, one can think of $G$ as a covering of $O_x$, and (when the stabilizer is finite) the degree of the cover is the size of the stabilizer; in the example above this coincides with the number of connected components of $G$. Indeed, $O(2)$ has 2 connected components, the matrices with determinant equal to 1 and those with determinant equal to -1, and that fact is reflected in $O_x$: the vector $g.x$ traces the circle once as $g$ runs over $O(2)_e$ (the identity component) and once more as $g$ runs over the other component, so the circle is drawn twice, which means that $O(2)$ is a 2-fold cover of each $O_x$.
In this language, we can say that the stabilizer $G_x$ of an element $x$ is the fiber $\phi^{-1}(x)$, whose cardinality gives us the degree of the covering map.
Actually, from this point of view, $\phi$ defines a quotient map, which is very suitable for an orbit-stabilizer type of argument. Since $G\to G/G_x$ is a principal $G_x$-bundle (locally a product $G/G_x\times G_x$), making the identifications $G/G_x\sim O_x$ and $G_x\sim \phi^{-1}(x)$, we can think of $G$, at least locally, as $O_x\times\phi^{-1}(x)$.
Moving away from counting arguments and towards dimensionality, I found the so-called orbit-stabilizer theorem for Lie groups, which has the same feel as the covering-map approach. It states that
$dim(G)=dim(O_x)+dim(G_x)$
where the dimensions are taken as manifolds.
In the $O(2)$ case, we have $dim(G)=1$, $dim(O_x)=1$ and $dim(G_x)=0$, as in any other case where $G_x$ is a finite group; hence $\phi$ is a quotient map and $dim(G)=dim(O_x)$, as expected from a covering map.
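Running the same dimension count on the symplectic example that started this post (my own check for $n=1$, where $Sp(1,\mathbb{R})=SL(2,\mathbb{R})$ acts on $M=\mathbb{R}^2$): the stabilizer of $e_1$ consists of the matrices $\begin{pmatrix}1&b\\0&1\end{pmatrix}$, so
$dim(G)=3,\quad dim(G_x)=1,\quad dim(O_x)=3-1=2=dim(M\setminus\{0\}).$
Every nonzero vector therefore has an orbit of full dimension, so every orbit is open; since the open orbits partition the connected set $M\setminus\{0\}$, there can only be one orbit, which is exactly the transitivity from the homework.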
In the end, I didn't use any of these arguments for my proof, but I quite enjoyed this diversion from my first thought.
|
CommonCrawl
|
NetEcon 2021
The 16th Workshop on the Economics of Networks, Systems and Computation
PROGRAM AND ACCEPTED PAPERS
Program: July 23, 2021, 10AM-12PM EDT
10:00am-10:20am EDT
"Modularity and Mutual Information in Networks: Two Sides of the Same Coin"
by Yongkang Guo, Zhihuan Huang, Yuqing Kong and Qian Wang
Discussant: Christopher Moore
"The Platform Design Problem"
by Christos Papadimitriou, Kiran Vodrahalli and Mihalis Yannakakis
Discussant: Richard Cole
Gather.town: moderated discussion of recent NetEcon-topic papers that have been most influential on you
"Dynamic Posted-Price Mechanisms for the Blockchain Transaction-Fee Market"
by Matheus Venturyne Xavier Ferreira, Daniel J. Moroz, David C. Parkes and Mitchell Stern
Discussant: Yakov Babichenko
"Faithful Federated Learning"
by Meng Zhang, Ermin Wei and Randall Berry
Discussant: Jonathan Newton
11:40am-12:00pm EDT
Gather.town: open discussion
Abstract: Community structure is an important feature of many networks. One of the most popular ways to capture community structure is using a quantitative measure, modularity, which can serve both as a standard benchmark comparing different community detection algorithms and as an optimization objective for detecting communities. Previous works on modularity mainly focus on approximation methods for modularity maximization to detect communities, or on minor modifications to the definition.
In this paper, we study modularity from an information-theoretical perspective and show that modularity and mutual information in networks are essentially the same. The main contribution is that we develop a family of generalized modularity measures, $f$-Modularity, which includes the original modularity as a special case. At a high level, we show that the significance of community structure is equivalent to the amount of information contained in the network. On the one hand, $f$-Modularity has an information-theoretical interpretation and enjoys the desired properties of a mutual information measure. On the other hand, quantifying community structure also provides an approach to estimating the mutual information between discrete random samples with a large value space, given only limited samples. We demonstrate the algorithm for optimizing $f$-Modularity in a relatively general case, and validate it through experimental results on simulated networks. We also apply $f$-Modularity to real-world market networks. Our results bridge two important fields, complex networks and information theory, and also shed light on the design of measures of community structure in the future.
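For readers who have not met modularity before, here is a minimal sketch of the quantity being generalized (my own illustration in Python with numpy, using the standard Newman-Girvan definition rather than the paper's $f$-Modularity; the example graph is made up):

import numpy as np

def modularity(A, communities):
    # Q = (1/2m) * sum_ij [A_ij - k_i * k_j / (2m)] * delta(c_i, c_j) for an undirected adjacency matrix A.
    A = np.asarray(A, dtype=float)
    k = A.sum(axis=1)                                  # node degrees
    two_m = A.sum()                                    # 2m = twice the number of edges
    same = np.equal.outer(communities, communities)    # delta(c_i, c_j)
    return float(((A - np.outer(k, k) / two_m) * same).sum() / two_m)

# Two triangles joined by a single bridge edge; splitting along the bridge scores well.
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
print(modularity(A, np.array([0, 0, 0, 1, 1, 1])))     # about 0.357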
Abstract: On-line firms deploy suites of software platforms, where each platform is designed to interact with users during a certain activity, such as browsing, chatting, socializing, emailing, driving, etc. The economic and incentive structure of this exchange, as well as its algorithmic nature, have not been explored to our knowledge. We model this interaction as a Stackelberg game between a Designer and one or more Agents. We model an Agent as a Markov chain whose states are activities; we assume that the Agent's utility is a linear function of the steady-state distribution of this chain. The Designer may design a platform for each of these activities/states; if a platform is adopted by the Agent, the transition probabilities of the Markov chain are affected, and so is the objective of the Agent. The Designer's utility is a linear function of the steady-state probabilities of the accessible states (that is, the ones for which the platform has been adopted), minus the development cost of the platforms. The underlying optimization problem of the Agent --- that is, how to choose the states for which to adopt the platform --- is an MDP. If this MDP has a simple yet plausible structure (the transition probabilities from one state to another only depend on the target state and the recurrent probability of the current state), the Agent's problem can be solved by a greedy algorithm. The Designer's optimization problem (designing a custom suite for the Agent so as to optimize, through the Agent's optimum reaction, the Designer's revenue) is in general NP-hard to approximate within any finite ratio; however, in the special case above, while still NP-hard, it admits an FPTAS. These results generalize, under mild additional assumptions, from a single Agent to a distribution of Agents with finite support, as well as to the setting where other Designers have already created platforms, and the Designer must find the best response to the strategies of the other Designers. We discuss other implications of our results and directions of future research.
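A toy version of the modelling primitive described above (my own sketch in Python with numpy; the activities, transition probabilities, and utility weights are invented for illustration): the Agent is a Markov chain over activities, adopting a platform changes the transition probabilities, and utilities are linear in the resulting steady-state distribution.

import numpy as np

def stationary(P):
    # Stationary distribution of an ergodic row-stochastic matrix P (left eigenvector for eigenvalue 1).
    vals, vecs = np.linalg.eig(P.T)
    pi = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    return pi / pi.sum()

P_base = np.array([[0.2, 0.5, 0.3],                    # activities: 0 = browsing, 1 = chatting, 2 = other
                   [0.3, 0.4, 0.3],
                   [0.3, 0.3, 0.4]])
P_adopted = np.array([[0.6, 0.2, 0.2],                 # adopting a platform for activity 0 pulls traffic towards it
                      [0.5, 0.3, 0.2],
                      [0.5, 0.2, 0.3]])
agent_weights = np.array([1.0, 0.2, 0.1])              # the Agent's utility is linear in the stationary distribution

for label, P in (("no platform", P_base), ("platform on activity 0", P_adopted)):
    pi = stationary(P)
    print(label, pi.round(3), "agent utility:", round(float(agent_weights @ pi), 3))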
Abstract: In recent years, prominent blockchain systems such as Bitcoin and Ethereum have experienced explosive growth in transaction volume, leading to frequent surges in demand for limited block space, causing transaction fees to fluctuate by orders of magnitude. Under the standard first-price auction approach, users find it difficult to estimate how much they need to bid to get their transactions accepted (balancing the risk of delay with a preference to avoid paying more than is necessary).
In light of these issues, new transaction fee mechanisms have been proposed, most notably EIP-1559, proposed by Buterin (2019). A problem with EIP-1559 is that under market instability, it again reduces to a first-price auction. Here, we propose dynamic posted-price mechanisms, which are ex post Nash incentive compatible for myopic bidders and dominant strategy incentive compatible for myopic miners. We give sufficient conditions for which our mechanisms are stable and approximately welfare optimal in the probabilistic setting where each time step, bidders are drawn i.i.d. from a static (but unknown) distribution. Under this setting, we show instances where our dynamic mechanisms are stable, but EIP-1559 is unstable. Our main technical contribution is an iterative algorithm that, given oracle access to a Lipschitz continuous and concave function $f$, converges to a fixed point of $f$.
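The fixed-point ingredient can be illustrated with a much simpler one-dimensional routine (my own sketch in Python; this bisection is not the paper's algorithm, which works with oracle access to a Lipschitz continuous, concave $f$): for a continuous $f$ mapping $[a,b]$ into itself, bisecting on the sign of $f(x)-x$ converges to a fixed point.

import math

def fixed_point(f, lo, hi, tol=1e-10):
    # Bisection on g(x) = f(x) - x for a continuous f that maps [lo, hi] into itself.
    assert f(lo) >= lo and f(hi) <= hi                 # guarantees a sign change of g on [lo, hi]
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) >= mid:
            lo = mid                                    # a fixed point still lies in [mid, hi]
        else:
            hi = mid
    return 0.5 * (lo + hi)

print(fixed_point(math.sqrt, 0.25, 4.0))               # f(x) = sqrt(x) on [0.25, 4]; fixed point at x = 1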
Abstract: Federated learning enables machine learning algorithms to be trained over multiple decentralized edge devices without requiring the exchange of local datasets. Successfully deploying federated learning requires ensuring that agents (e.g., mobile devices) faithfully execute the intended algorithm, which has been largely overlooked in the literature. In this study, we first use risk bounds to analyze how the key feature of federated learning, unbalanced and non-i.i.d. data, affects agents' incentives to voluntarily participate and obediently follow traditional federated learning algorithms. Our analysis reveals that agents with less typical data distributions and relatively more samples are more inclined to opt out of or tamper with federated learning algorithms. We then design a Faithful Federated Learning (FFL) mechanism which approximates the Vickrey–Clarke–Groves (VCG) payments via an incremental computation. We show that it achieves (probably approximate) optimality, faithful implementation, voluntary participation, and budget balance. Further, the time complexity of computing all agents' payments is $\mathcal{O}(1)$ in the number of agents.
From 2020, the following papers which were presented virtually are carried forward:
"Learning Opinions in Social Networks" by Vincent Conitzer, Debmalya Panigrahi and Hanrui Zhang
Invited Discussants: Wei Chen and David Kempe
Chair: Grant Schoenebeck
Abstract: We study the problem of learning opinions in social networks. The learner observes the states of some sample nodes from a social network, and tries to infer the states of other nodes, based on the structure of the network. We show that sample-efficient learning is impossible when the network exhibits strong noise, and give a polynomial-time algorithm for the problem with nearly optimal sample complexity when the network is sufficiently stable.
"Towards Data Auctions With Externalities" by Anish Agarwal, Munther Dahleh, Thibaut Horel and Maryann Rui
Invited Discussants: Dirk Bergemann and Tan Gan
Chair: Heinrich Nax
Abstract: The design of data markets has gained in importance as firms increasingly use predictions from machine learning models to make their operations more effective, yet need to externally acquire the necessary training data to fit such models. This is particularly true in the context of the Internet where an ever-increasing amount of user data is being collected and exchanged. A property of such markets that has been given limited consideration thus far is the externality faced by a firm when data is allocated to other, competing firms. Addressing this is likely necessary for progress towards the practical implementation of such markets. In this work, we consider the case with n competing firms and a monopolistic data seller. We demonstrate that modeling the utility of firms solely through the increase in prediction accuracy experienced reduces the complex, combinatorial problem of allocating and pricing multiple data sets to an auction of a single digital (freely replicable) good. Crucially, this also enables us to model the negative externalities experienced by a firm resulting from other firms' allocations as a weighted directed graph. We obtain forms of the welfare-maximizing and revenue-maximizing auctions for such settings, and highlight how the form of the firms' private information – whether they know the externalities they exert on others or that others exert on them – affects the structure of the optimal mechanisms. We find that in all cases, the optimal allocation rules turn out to be single thresholds (one per firm), in which the seller allocates all information or none of it to a firm.
"A Closed-Loop Framework for Inference, Prediction and Control of SIR Epidemics on Networks" by Ashish R. Hota, Jaydeep Godbole, Sanket Kumar Singh and Philip E. Pare
Invited Discussants: Kuang Xu and Lei Ying
Chair: Longbo Huang
Abstract: Motivated by the ongoing pandemic COVID-19, we propose a closed-loop framework that combines inference from testing data, learning the parameters of the dynamics and optimal resource allocation for controlling the spread of the susceptible-infected-recovered (SIR) epidemic on networks. Our framework incorporates several key factors present in testing data, such as high risk individuals are more likely to undergo testing and infected individuals can remain as asymptomatic carriers of the disease. We then present two tractable optimization problems to evaluate the trade-off between controlling the growth-rate of the epidemic and the cost of non-pharmaceutical interventions (NPIs). Our results provide critical insights for policy-makers, including the emergence of a second wave of infections if NPIs are prematurely withdrawn.
|
CommonCrawl
|
Can we detect Galactic spiral arms? 3D dust distribution in the Milky Way
Sara Rezaei Kh., Coryn A. L. Bailer-Jones, Morgan Fouesneau, Richard Hanson
Journal: Proceedings of the International Astronomical Union / Volume 12 / Issue S330 / April 2017
We present a model to map the 3D distribution of dust in the Milky Way. Although dust is just a tiny fraction of what comprises the Galaxy, it plays an important role in various processes. In recent years various maps of dust extinction have been produced, but we still lack a good knowledge of the dust distribution. Our presented approach leverages line-of-sight extinctions towards stars in the Galaxy at measured distances. Since extinction is proportional to the integral of the dust density towards a given star, it is possible to reconstruct the 3D distribution of dust by combining many lines-of-sight in a model accounting for the spatial correlation of the dust. Such a technique can be used to infer the most probable 3D distribution of dust in the Galaxy even in regions which have not been observed. This contribution provides one of the first maps which does not show the "fingers of God" effect. Furthermore, we show that expected high precision measurements of distances and extinctions offer the possibility of mapping the spiral arms in the Galaxy.
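In symbols (my own paraphrase of the sentence above, with $\rho$ the dust density along a sight line and $c$ a proportionality constant): the extinction measured towards a star at distance $d$ is $A(d)=c\int_0^d\rho(s)\,ds$, so each star supplies one integral constraint on $\rho$ along its line of sight, and combining many such constraints with a prior on the spatial correlation of the dust is what allows the three-dimensional density to be reconstructed, even in directions that were not observed.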
Diagnostic change 10 years after a first episode of psychosis
M. Heslin, B. Lomas, J. M. Lappin, K. Donoghue, U. Reininghaus, A. Onyejiaka, T. Croudace, P. B. Jones, R. M. Murray, P. Fearon, P. Dazzan, C. Morgan, G. A. Doody
Journal: Psychological Medicine / Volume 45 / Issue 13 / October 2015
A lack of an aetiologically based nosology classification has contributed to instability in psychiatric diagnoses over time. This study aimed to examine the diagnostic stability of psychosis diagnoses using data from an incidence sample of psychosis cases, followed up after 10 years and to examine those baseline variables which were associated with diagnostic change.
Data were examined from the ÆSOP and ÆSOP-10 studies, an incidence and follow-up study, respectively, of a population-based cohort of first-episode psychosis cases from two sites. Diagnosis was assigned using ICD-10 and DSM-IV-TR. Diagnostic change was examined using prospective and retrospective consistency. Baseline variables associated with change were examined using logistic regression and likelihood ratio tests.
Slightly more (59.6%) cases had the same baseline and lifetime ICD-10 diagnosis compared with DSM-IV-TR (55.3%), but prospective and retrospective consistency was similar. Schizophrenia, psychotic bipolar disorder and drug-induced psychosis were more prospectively consistent than other diagnoses. A substantial number of cases with other diagnoses at baseline (ICD-10, n = 61; DSM-IV-TR, n = 76) were classified as having schizophrenia at 10 years. Many variables were associated with change to schizophrenia but few with overall change in diagnosis.
Diagnoses other than schizophrenia should be regarded as potentially provisional.
'One Health' investigation: outbreak of human Salmonella Braenderup infections traced to a mail-order hatchery – United States, 2012–2013
J. H. NAKAO, J. PRINGLE, R. W. JONES, B. E. NIX, J. BORDERS, G. HESELTINE, T. M. GOMEZ, B. McCLUSKEY, C. S. RONEY, D. BRINSON, M. ERDMAN, A. McDANIEL, C. BARTON BEHRAVESH
Journal: Epidemiology & Infection / Volume 143 / Issue 10 / July 2015
Human salmonellosis linked to contact with live poultry is an increasing public health concern. In 2012, eight unrelated outbreaks of human salmonellosis linked to live poultry contact resulted in 517 illnesses. In July 2012, PulseNet, a national molecular surveillance network, reported a multistate cluster of a rare strain of Salmonella Braenderup infections which we investigated. We defined a case as infection with the outbreak strain, determined by pulsed-field gel electrophoresis, with illness onset from 25 July 2012–27 February 2013. Ill persons and mail-order hatchery (MOH) owners were interviewed using standardized questionnaires. Traceback and environmental investigations were conducted. We identified 48 cases in 24 states. Twenty-six (81%) of 32 ill persons reported live poultry contact in the week before illness; case-patients named 12 different MOHs from eight states. The investigation identified hatchery D as the ultimate poultry source. Sampling at hatchery D yielded the outbreak strain. Hatchery D improved sanitation procedures and pest control; subsequent sampling failed to yield Salmonella. This outbreak highlights the interconnectedness of humans, animals, and the environment and the importance of industry knowledge and involvement in solving complex outbreaks. Preventing these infections requires a 'One Health' approach that leverages expertise in human, animal, and environmental health.
By William Andrefsky, Loukas Barton, Charlotte Beck, Robert L. Bettinger, Chris Clarkson, Nicole Crossland, Lara Cueni, Jennifer M. Ferris, Raven Garvey, Nathan Goodale, Clair Harris, Lucille E. Harris, Michael Haslam, Brooke Hundtoft, Terry L. Hunt, George T. Jones, Steven L. Kuhn, Ian Kuijt, Carl P. Lipo, R. Lee Lyman, D. Shane Miller, Christopher Morgan, Michael J. O'Brien, Curtis Osterhoudt, Anna Marie Prentiss, Colin P. Quinn, Michael Shott, Nathan E. Stevens, Todd L. VanPool
Edited by Nathan Goodale, Hamilton College, New York, William Andrefsky, Jr, Washington State University
Book: Lithic Technological Systems and Evolutionary Theory
Print publication: 22 January 2015, pp xiii-xvi
17 - The environmental and climatic impacts of volcanic ash deposition
from Part Three - Modes of volcanically induced global environmental change
By Morgan T. Jones
Edited by Anja Schmidt, University of Cambridge, Kirsten Fristad, Western Washington University, Linda Elkins-Tanton, Arizona State University
Book: Volcanism and Global Environmental Change
Print publication: 08 January 2015, pp 260-274
By Alessandro Aiuppa, Nick T. Arndt, Jean Besse, Benjamin A. Black, Terrence J. Blackburn, Nicole Bobrowski, Samuel A. Bowring, Seth D. Burgess, Kevin Burke, Ying Cui, Vincent Courtillot, Amy Donovan, Linda T. Elkins-Tanton, Anna Fetisova, Frédéric Fluteau, Kirsten E. Fristad, Lori S. Glaze, Thor H. Hansteen, Morgan T. Jones, Jeffrey T. Kiehl, Nadezhda A. Krivolutskaya, Kirstin Krüger, Lee R. Kump, Steffen Kutterolf, Dimitry V. Kuzmin, Jean-François Lamarque, A. Latyshev, Kimberly V. Lau, Tamsin A. Mather, Katja M. Meyer, Clive Oppenheimer, Vladimir Pavlov, Jonathan L. Payne, Ingrid Ukstins Peate, David Pieri, Sverre Planke, Ulrich Platt, Alexander Polozov, Fred Prata, Gemma Prata, David M. Pyle, Andy Ridgwell, Alan Robock, Ellen K. Schaal, Anja Schmidt, Stephen Self, Christine Shields, Juan Carlos Silva-Tamayo, Alexander V. Sobolev, Stephan V. Sobolev, Henrik Svensen, Trond H. Torsvik, Roman Veselovskiy
Print publication: 08 January 2015, pp viii-xii
Modelling the epidemic spread of an H1N1 influenza outbreak in a rural university town
N. K. VAIDYA, M. MORGAN, T. JONES, L. MILLER, S. LAPIN, E. J. SCHWARTZ
Published online by Cambridge University Press: 17 October 2014, pp. 1610-1620
Knowledge of mechanisms of infection in vulnerable populations is needed in order to prepare for future outbreaks. Here, using a unique dataset collected during a 2009 outbreak of influenza A(H1N1)pdm09 in a university town, we evaluated mechanisms of infection and identified that an epidemiological model containing partial protection of susceptibles best describes H1N1 dynamics in a rural university environment. We found that the protected group was over 14 times less susceptible to H1N1 infection than unprotected susceptibles. Our estimates show that the basic reproductive rate, $R_0$, was 5·96 (95% confidence interval 5·83–6·61), and, importantly, $R_0$ could be decreased to below 1 and similar epidemics could be avoided by increasing the proportion of the initial protected group. Moreover, several weeks into the epidemic, this protected group generated more new infections than the unprotected susceptible group, and thus, such protected groups should be taken into account while studying influenza epidemics in similar settings.
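A toy version of the mechanism described above (my own sketch in plain Python; the parameter values are illustrative stand-ins, not the paper's fitted estimates): an SIR-type model with an extra partially protected susceptible class whose susceptibility is scaled down by roughly a factor of 14.

def final_size(beta=0.6, gamma=0.1, sigma=1.0 / 14, protected=0.5, days=150, dt=0.05, i0=1e-4):
    # S: unprotected susceptibles, P: partially protected susceptibles, I: infectious, R: recovered.
    S, P, I, R = (1 - protected) * (1 - i0), protected * (1 - i0), i0, 0.0
    for _ in range(int(days / dt)):
        new_s = beta * S * I * dt                      # infections among unprotected susceptibles
        new_p = sigma * beta * P * I * dt              # protected group is roughly 14 times less susceptible
        rec = gamma * I * dt
        S, P, I, R = S - new_s, P - new_p, I + new_s + new_p - rec, R + rec
    return R                                            # fraction of the population ever infected

for p in (0.0, 0.5, 0.9):
    print(f"initial protected fraction {p:.1f}: final epidemic size {final_size(protected=p):.2f}")

Raising the initial protected fraction drives the effective reproduction number below 1 and shrinks the final epidemic size, mirroring the conclusion quoted above.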
Reappraising the long-term course and outcome of psychotic disorders: the AESOP-10 study – CORRIGENDUM
C. Morgan, J. Lappin, M. Heslin, K. Donoghue, B. Lomas, U. Reininghaus, A. Onyejiaka, T. Croudace, P. B. Jones, R. M. Murray, P. Fearon, G. A. Doody, P. Dazzan
Published online by Cambridge University Press: 15 April 2014, p. 2727
Reappraising the long-term course and outcome of psychotic disorders: the AESOP-10 study
Studies of the long-term course and outcome of psychoses tend to focus on cohorts of prevalent cases. Such studies bias samples towards those with poor outcomes, which may distort our understanding of prognosis. Long-term follow-up studies of epidemiologically robust first-episode samples are rare.
AESOP-10 is a 10-year follow-up study of 557 individuals with a first episode of psychosis initially identified in two areas in the UK (South East London and Nottingham). Detailed information was collated on course and outcome in three domains (clinical, social and service use) from case records, informants and follow-up interviews.
At follow-up, of the 532 incident cases identified at baseline, 37 (7%) had died, 29 (6%) had emigrated and eight (2%) were excluded. Of the remaining 458, 412 (90%) were traced and some information on follow-up was collated for 387 (85%). Most cases (265, 77%) experienced at least one period of sustained remission; at follow-up, 141 (46%) had been symptom free for at least 2 years. A majority (208, 72%) of cases had been employed for less than 25% of the follow-up period. The median number of hospital admissions, including at first presentation, was 2 [interquartile range (IQR) 1–4]; a majority (299, 88%) were admitted at least once and a minority (21, 6%) had 10 or more admissions. Overall, outcomes were worse for those with a non-affective diagnosis, for men and for those from South East London.
Sustained periods of symptom remission are usual following first presentation to mental health services for psychosis, including for those with a non-affective disorder; almost half recover.
MALT90: The Millimetre Astronomy Legacy Team 90 GHz Survey
J. M. Jackson, J. M. Rathborne, J. B. Foster, J. S. Whitaker, P. Sanhueza, C. Claysmith, J. L. Mascoop, M. Wienen, S. L. Breen, F. Herpin, A. Duarte-Cabral, T. Csengeri, S. N. Longmore, Y. Contreras, B. Indermuehle, P. J. Barnes, A. J. Walsh, M. R. Cunningham, K. J. Brooks, T. R. Britton, M. A. Voronkov, J. S. Urquhart, J. Alves, C. H. Jordan, T. Hill, S. Hoq, S. C. Finn, I. Bains, S. Bontemps, L. Bronfman, J. L. Caswell, L. Deharveng, S. P. Ellingsen, G. A. Fuller, G. Garay, J. A. Green, L. Hindson, P. A. Jones, C. Lenfestey, N. Lo, V. Lowe, D. Mardones, K. M. Menten, V. Minier, L. K. Morgan, F. Motte, E. Muller, N. Peretto, C. R. Purcell, P. Schilke, Schneider-N. Bontemps, F. Schuller, A. Titmarsh, F. Wyrowski, A. Zavagno
Published online by Cambridge University Press: 26 November 2013, e057
The Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey aims to characterise the physical and chemical evolution of high-mass star-forming clumps. Exploiting the unique broad frequency range and on-the-fly mapping capabilities of the Australia Telescope National Facility Mopra 22 m single-dish telescope, MALT90 has obtained 3′ × 3′ maps towards ~2000 dense molecular clumps identified in the ATLASGAL 870 μm Galactic plane survey. The clumps were selected to host the early stages of high-mass star formation and to span the complete range in their evolutionary states (from prestellar, to protostellar, and on to $\mathrm{H\,{\scriptstyle {II}}}$ regions and photodissociation regions). Because MALT90 mapped 16 lines simultaneously with excellent spatial (38 arcsec) and spectral (0.11 km s$^{-1}$) resolution, the data reveal a wealth of information about the clumps' morphologies, chemistry, and kinematics. In this paper we outline the survey strategy, observing mode, data reduction procedure, and highlight some early science results. All MALT90 raw and processed data products are available to the community. With its unprecedented large sample of clumps, MALT90 is the largest survey of its type ever conducted and an excellent resource for identifying interesting candidates for high-resolution studies with ALMA.
Bilateral hippocampal increase following first-episode psychosis is associated with good clinical, functional and cognitive outcomes
J. M. Lappin, C. Morgan, S. Chalavi, K. D. Morgan, A. A. T. S. Reinders, P. Fearon, M. Heslin, J. Zanelli, P. B. Jones, R. M. Murray, P. Dazzan
Journal: Psychological Medicine / Volume 44 / Issue 6 / April 2014
Hippocampal pathology has been proposed to underlie clinical, functional and cognitive impairments in schizophrenia. The hippocampus is a highly plastic brain region; examining change in volume, or change bilaterally, over time, can advance understanding of the substrate of recovery in psychosis.
Magnetic resonance imaging and outcome data were collected at baseline and 6-year follow-up in 42 first-episode psychosis subjects and 32 matched controls, to investigate whether poorer outcomes are associated with loss of global matter and hippocampal volumes. Bilateral hippocampal increase (BHI) over time, as a marker of hippocampal plasticity was hypothesized to be associated with better outcomes. Regression analyses were performed on: (i) clinical and functional outcomes with grey matter volume change and BHI as predictor variables; and (ii) cognitive outcome with BHI as predictor.
BHI was present in 29% of psychosis participants. There was no significant grey matter loss over time in either patient or control groups. Less severe illness course and lesser symptom severity were associated with BHI, but not with grey matter change. Employment and global function were associated with BHI and with less grey matter loss. Superior delayed verbal recall was also associated with BHI.
BHI occurs in a minority of patients following their first psychotic episode and is associated with good outcome across clinical, functional and cognitive domains.
Modelling the interplay between childhood and adult adversity in pathways to psychosis: initial evidence from the AESOP study
C. Morgan, U. Reininghaus, P. Fearon, G. Hutchinson, K. Morgan, P. Dazzan, J. Boydell, J. B. Kirkbride, G. A. Doody, P. B. Jones, R. M. Murray, T. Craig
Journal: Psychological Medicine / Volume 44 / Issue 2 / January 2014
There is evidence that a range of socio-environmental exposures is associated with an increased risk of psychosis. However, despite the fact that such factors probably combine in complex ways to increase risk, the majority of studies have tended to consider each exposure separately. In light of this, we sought to extend previous analyses of data from the AESOP (Aetiology and Ethnicity in Schizophrenia and Other Psychoses) study on childhood and adult markers of disadvantage to examine how they combine to increase risk of psychosis, testing both mediation (path) models and synergistic effects.
All patients with a first episode of psychosis who made contact with psychiatric services in defined catchment areas in London and Nottingham, UK (n = 390) and a series of community controls (n = 391) were included in the AESOP study. Data relating to clinical and social variables, including parental separation and loss, education and adult disadvantage, were collected from cases and controls.
There was evidence that the effect of separation from, but not death of, a parent in childhood on risk of psychosis was partially mediated through subsequent poor educational attainment (no qualifications), adult social disadvantage and, to a lesser degree, low self-esteem. In addition, there was strong evidence that separation from, but not death of, a parent combined synergistically with subsequent disadvantage to increase risk. These effects held for all ethnic groups in the sample.
Exposure to childhood and adult disadvantage may combine in complex ways to push some individuals along a predominantly sociodevelopmental pathway to psychosis.
Individualized prediction of illness course at the first psychotic episode: a support vector machine MRI study
J. Mourao-Miranda, A. A. T. S. Reinders, V. Rocha-Rego, J. Lappin, J. Rondina, C. Morgan, K. D. Morgan, P. Fearon, P. B. Jones, G. A. Doody, R. M. Murray, S. Kapur, P. Dazzan
Journal: Psychological Medicine / Volume 42 / Issue 5 / May 2012
Published online by Cambridge University Press: 07 November 2011, pp. 1037-1047
To date, magnetic resonance imaging (MRI) has made little impact on the diagnosis and monitoring of psychoses in individual patients. In this study, we used a support vector machine (SVM) whole-brain classification approach to predict future illness course at the individual level from MRI data obtained at the first psychotic episode.
One hundred patients at their first psychotic episode and 91 healthy controls had an MRI scan. Patients were re-evaluated 6.2 years (s.d.=2.3) later, and were classified as having a continuous, episodic or intermediate illness course. Twenty-eight subjects with a continuous course were compared with 28 patients with an episodic course and with 28 healthy controls. We trained each SVM classifier independently for the following contrasts: continuous versus episodic, continuous versus healthy controls, and episodic versus healthy controls.
At baseline, patients with a continuous course were already distinguishable, with significance above chance level, from both patients with an episodic course (p=0.004, sensitivity=71, specificity=68) and healthy individuals (p=0.01, sensitivity=71, specificity=61). Patients with an episodic course could not be distinguished from healthy individuals. When patients with an intermediate outcome were classified according to the discriminating pattern episodic versus continuous, 74% of those who did not develop other episodes were classified as episodic, and 65% of those who did develop further episodes were classified as continuous (p=0.035).
We provide preliminary evidence of MRI application in the individualized prediction of future illness course, using a simple and automated SVM pipeline. When replicated and validated in larger groups, this could enable targeted clinical decisions based on imaging data.
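For readers unfamiliar with the approach, here is a schematic of this style of analysis on synthetic data (my own sketch using Python with scikit-learn; it is not the authors' pipeline, and the voxel counts, fold numbers, and injected group difference are made up):

import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
n_per_group, n_voxels = 28, 2000                       # e.g., 28 continuous-course vs. 28 episodic-course cases
X = rng.normal(size=(2 * n_per_group, n_voxels))       # stand-in for baseline whole-brain grey-matter maps
y = np.repeat([0, 1], n_per_group)                     # 0 = episodic course, 1 = continuous course
X[y == 1, :50] += 0.5                                  # inject a weak, distributed group difference

clf = SVC(kernel="linear", C=1.0)
cv = StratifiedKFold(n_splits=8, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f}")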
The varying impact of type, timing and frequency of exposure to childhood adversity on its association with adult psychotic disorder
H. L. Fisher, P. B. Jones, P. Fearon, T. K. Craig, P. Dazzan, K. Morgan, G. Hutchinson, G. A. Doody, P. McGuffin, J. Leff, R. M. Murray, C. Morgan
Journal: Psychological Medicine / Volume 40 / Issue 12 / December 2010
Childhood adversity has been associated with onset of psychosis in adulthood but these studies have used only general definitions of this environmental risk indicator. Therefore, we sought to explore the prevalence of more specific adverse childhood experiences amongst those with and without psychotic disorders using detailed assessments in a large epidemiological case-control sample (AESOP).
Data were collected on 182 first-presentation psychosis cases and 246 geographically matched controls in two UK centres. Information relating to the timing and frequency of exposure to different types of childhood adversity (neglect, antipathy, physical and sexual abuse, local authority care, disrupted living arrangements and lack of supportive figure) was obtained using the Childhood Experience of Care and Abuse Questionnaire.
Psychosis cases were three times more likely to report severe physical abuse from the mother that commenced prior to 12 years of age, even after adjustment for other significant forms of adversity and demographic confounders. A non-significant trend was also evident for greater prevalence of reported severe maternal antipathy amongst those with psychosis. Associations with maternal neglect and childhood sexual abuse disappeared after adjusting for maternal physical abuse and antipathy. Paternal maltreatment and other forms of adversity were not associated with psychosis nor was there evidence of a dose–response effect.
These findings suggest that only specific adverse childhood experiences are associated with psychotic disorders and only in a minority of cases. If replicated, this greater precision will ensure that research into the mechanisms underlying the pathway from childhood adversity to psychosis is more fruitful.
By Amelia Evoli, Ami K. Mankodi, Ana Ferreiro, Anders Oldfors, Anne K. Lampe, Anneke J. van der Kooi, Bernard Brais, Bertrand Fontaine, Bjarne Udd, Carina Wallgren-Pettersson, Caroline A. Sewry, Carsten G. Bönnemann, Cecilia Jimenez-Mallebera, Chad Heatwole, Charles A. Thornton, Corrado Angelini, David Hilton-Jones, Doreen Fialho, Duygu Selcen, Edward J. Cupler, Emma Ciafaloni, Enrico Bertini, Eric A. Shoubridge, Eric Logigian, Erin O'Ferrall, Eugenio Mercuri, Franco Taroni, Frank L. Mastaglia, Frederic Relaix, George Karpati, Giovanni Meola, Gisèle Bonne, Hannah R. Briemberg, Hanns Lochmüller, Heinz Jungbluth, Ichizo Nishino, Jenny E. Morgan, John Day, John Vissing, John T. Kissel, Kate Bushby, Leslie Morrison, Maria J. Molnar, Marianne de Visser, Marinos C. Dalakas, Mary Kay Floeter, Mariz Vainzof, Maxwell S. Damian, Michael G. Hanna, Michael Rose, Michael Sinnreich, Michael Swash, Miranda D. Grounds, Mohammed Kian Salajegheh, Nigel G. Laing, Patrick F. Chinnery, Rabi Tawil, Rénald Gilbert, Richard Orrell, Robert C. Griggs, Roberto Massa, Saiju Jacob, Shannon L. Venance, Stefano Di Donato, Stella Mitrani-Rosenbaum, Stephen Gee, Stuart Viegas, Susan C. Brown, Tahseen Mozaffar, Tanja Taivassalo, Valeria A. Sansone, Violeta Mihaylova, Yaacov Anziska, Zohar Argov
George Karpati, McGill University, Montréal
Edited by David Hilton-Jones, Kate Bushby, Robert C. Griggs
Book: Disorders of Voluntary Muscle
Print publication: 21 January 2010, pp vii-x
Cumulative social disadvantage, ethnicity and first-episode psychosis: a case-control study
C. Morgan, J. Kirkbride, G. Hutchinson, T. Craig, K. Morgan, P. Dazzan, J. Boydell, G. A. Doody, P. B. Jones, R. M. Murray, J. Leff, P. Fearon
Numerous studies have reported high rates of psychosis in the Black Caribbean population in the UK. Recent speculation about the reasons for these high rates has focused on social factors. However, there have been few empirical studies. We sought to compare the prevalence of specific indicators of social disadvantage and isolation, and variations by ethnicity, in subjects with a first episode of psychosis and a series of healthy controls.
All cases with a first episode of psychosis who made contact with psychiatric services in defined catchment areas in London and Nottingham, UK and a series of community controls were recruited over a 3-year period. Data relating to clinical and social variables were collected from cases and controls.
On all indicators, cases were more socially disadvantaged and isolated than controls, after controlling for potential confounders. These associations held when the sample was restricted to those with an affective diagnosis and to those with a short prodrome and short duration of untreated psychosis. There was a clear linear relationship between concentrated disadvantage and odds of psychosis. Similar patterns were evident in the two main ethnic groups, White British and Black Caribbean. However, indicators of social disadvantage and isolation were more common in Black Caribbean subjects than White British subjects.
We found strong associations between indicators of disadvantage and psychosis. If these variables index exposure to factors that increase risk of psychosis, their greater prevalence in the Black Caribbean population may contribute to the reported high rates of psychosis in this population.
Minor physical anomalies in patients with first-episode psychosis: their frequency and diagnostic specificity
T. Lloyd, P. Dazzan, K. Dean, S. B. G. Park, P. Fearon, G. A. Doody, J. Tarrant, K. D. Morgan, C. Morgan, G. Hutchinson, J. Leff, G. Harrison, R. M. Murray, P. B. Jones
Published online by Cambridge University Press: 30 July 2007, pp. 71-77
An increased prevalence of minor physical anomalies (MPAs) has been extensively documented in schizophrenia but their specificity for the disorder remains unclear. We investigated the prevalence and the predictive power of MPAs in a large sample of first-episode psychotic patients across a range of diagnoses.
MPAs were examined in 242 subjects with first-episode psychosis (50% schizophrenia, 45% affective psychosis and 5% substance-induced psychosis) and 158 healthy controls. Categorical principal components analysis and analysis of variance were undertaken, and individual items with the highest loading were tested using the χ2 test.
Overall facial asymmetry, asymmetry of the orbital landmarks, and the Frankfurt horizontal significantly differentiated patients with schizophrenia and affective psychosis from controls, as did a 'V-shaped' palate, reduced palatal ridges, abnormality of the left ear surface and the shape of the left and right ears. Patients with affective psychosis had significantly lowered eye fissures compared with control subjects.
MPAs are not specific to schizophrenia, suggesting a common developmental pathway for non-affective and affective psychoses. The topographical distribution of MPAs in this study is suggestive of an insult occurring during organogenesis in the first trimester of pregnancy.
Aggressive behaviour at first contact with services: findings from the AESOP First Episode Psychosis Study
K. DEAN, E. WALSH, C. MORGAN, A. DEMJAHA, P. DAZZAN, K. MORGAN, T. LLOYD, P. FEARON, P. B. JONES, R. M. MURRAY
Background. Aggressive behaviour is increased among those with schizophrenia but less is known about those with affective psychoses. Similarly, little is known about aggressive behaviour occurring at the onset of illness.
Method. The main reasons for presentation to services were examined among those recruited to a UK-based first episode psychosis study. The proportion of individuals presenting with aggressive behaviour was determined and these individuals were compared to those who were not aggressive on a range of variables including sociodemographic, clinical, criminal history, service contact, and symptom characteristics. Among the aggressive group, those who were physically violent were distinguished from those who were not violent but who were still perceived to present a risk of violence to others.
Results. Almost 40% (n=194) of the sample were aggressive at first contact with services; approximately half of these were physically violent (n=103). Younger age, African-Caribbean ethnicity and a history of previous violent offending were independently associated with aggression. Aggressive behaviour was associated with a diagnosis of mania and individual manic symptoms were also associated with aggression both for the whole sample and for those with schizophrenia. Factors differentiating violent from non-violent aggressive patients included male gender, lower social class and past violent offending.
Conclusions. Aggressive behaviour is not an uncommon feature in those presenting with first episode psychosis. Sociodemographic and past offending factors are associated with aggression and further differentiate those presenting with more serious violence. A diagnosis of mania and the presence of manic symptoms are associated with aggression.
Crohn's disease in people exposed to clinical cases of bovine paratuberculosis
P. H. JONES, T. B. FARVER, B. BEAMAN, B. ÇETINKAYA, K. L. MORGAN
Journal: Epidemiology & Infection / Volume 134 / Issue 1 / February 2006
Mycobacterium avium subspecies paratuberculosis (Map), the cause of ruminant paratuberculosis, has been proposed as the causative agent of Crohn's disease. The objective of this study was to determine whether exposure to clinical cases of bovine paratuberculosis was a risk factor for Crohn's disease. A questionnaire was sent to dairy farmers living on premises where the occurrence or absence of clinical cases of bovine paratuberculosis had previously been determined. The prevalence of Crohn's disease was found to be similar to that reported in other studies in the United Kingdom and showed no association with bovine paratuberculosis. There was, however, a univariate association with geographical region. Ulcerative colitis showed univariate associations with age, frequency of contact with cattle and with smoking. The results do not support the hypothesis that Map plays a causative role in the aetiology of Crohn's disease.
Primary blasting in a limestone quarry: physicochemical characterization of the dust clouds
T. Jones, A. Morgan, R. Richards
Journal: Mineralogical Magazine / Volume 67 / Issue 2 / April 2003
Airborne dust generated by primary blasting was collected in Taffs Well Quarry, just north of Cardiff, Wales. Collections of airborne particulate matter were also made in the nearby village of Morganstown at the same time as the blasting collections. The explosions were recorded on a motor-driven camera and a digital video camera. These images show that the dust clouds generated by the explosions consist of three distinct components: a reddish-grey dust cloud, followed by a light grey dust cloud, and finally a pale grey cloud that stayed near the blast face. It is believed that the reddish-grey cloud was composed mostly of mineral grains, as evidenced by the chiefly red colour of the dolomitic limestone rock in the quarry. The whiter clouds contained more explosive combustion particles (diesel soot). The samples were studied by analytical scanning electron microscopy, very high-resolution field emission scanning electron microscopy, and image analysis. The two different components (minerals and diesel soot) can be readily seen under high-resolution electron microscopy. Any consideration of the possible adverse health effects or nuisance value of this dust needs to consider both of these components. A size distribution of the quarry particles shows that soot particles dominate the assemblage under 2 μm, whereas the mineral grains are more abundant over 2 μm. This contrasts with the Morganstown particle sizes, where the two components show similar size distributions. The determination of the mineralogy of the quarry dust and Morganstown particles has shown highly complex and heterogeneous mixtures, though some distribution patterns are emerging. It is concluded that much of the dust in Morganstown probably originated from sources other than the quarry, such as other local industries, roads and construction sites.
Belief distorted Nash equilibria: introduction of a new kind of equilibrium in dynamic games with distorted information
S.I. : Optimization Models with Econ. & Game Th. App.
Agnieszka Wiszniewska-Matyszkiel1
Annals of Operations Research volume 243, pages 147–177 (2016)
In this paper the concept of belief distorted Nash equilibrium (BDNE) is introduced. It is a new concept of equilibrium for games in which players have incomplete, ambiguous or distorted information about the game they play, especially in a dynamic context. The distortion of a player's information concerns how the other players, and/or an external system changing in response to players' decisions, are going to react to his/her current decision. The concept of BDNE encompasses a broader concept of pre-BDNE, which reflects the fact that players best respond to their beliefs, and self-verification of those beliefs. The relations between BDNE and Nash or subjective equilibria are examined, as well as the existence and properties of BDNE. Examples are presented, including models of a common ecosystem, a repeated Cournot oligopoly, a repeated Minority Game or local public good with congestion effect, and a repeated Prisoner's Dilemma.
It is claimed that Nash equilibrium is the most important concept in noncooperative game theory.
Indeed, it is the only solution concept that can be sustained whenever rational players have full information about the game in which they participate: the number of players involved, their strategy sets and payoff functions and, in the case of dynamic games, also the dynamics of the underlying system.
Nash equilibrium ceases to be a solution whenever at least one of the above is unknown, especially if the information about it is distorted.
On the other hand, incomplete, ambiguous or even distorted information is a genuinely important feature of contemporary decision problems.
There are many attempts to extend the concept of Nash equilibria to make it work in the case of incomplete information: among others, Bayesian equilibria introduced by Harsanyi (1967), correlated equilibria introduced by Aumann (1974, 1987), \(\Delta \)-rationalizability introduced by Battigalli and Siniscalchi (2003), self-confirming equilibria introduced by Fudenberg and Levine (1993) and subjective equilibria introduced by Kalai and Lehrer (1993, 1995).
These and some other equilibrium concepts tackling the problem of incomplete information are described in a more detailed way in "Other concepts of equilibria taking incomplete, ambiguous or distorted information into account" section in Appendix 3.
All those concepts are based on two obvious basic assumptions:
players best respond to their beliefs;
beliefs cannot be contradicted during the play (or repeated play) if players choose strategies maximizing their expected payoffs given those beliefs.
However, those concepts are not defined in a form applicable to dynamic games.
Moreover, none of those concepts apart from subjective equilibrium takes into account the fact that players' information about the game can be not only incomplete, but severely distorted. For example, they do not cover situations in which even the number of players is not known correctly. Besides, none of those concepts copes with ambiguity.
To fill in this gap, the concept of belief distorted Nash equilibrium (BDNE) is introduced. It is defined in a form which can be applied both to repeated and dynamic games. It takes into account players' information which can be incomplete, ambiguous or even distorted. Nevertheless, it is based on the same two underlying assumptions which are fulfilled by all the incomplete information equilibrium concepts. It seems to be especially appropriate for games with many players.
Another interesting property of BDNE is the fact that this concept reflects the character of scientific modelling in problems of a special structure, which often appears especially in socio-economic or cognitive context.
In that class of problems there are three aspects:
optimization of players without full objective knowledge about parameters which influence the result of this optimization,
the fact that decision makers try to build (or adopt from some sources) a model of this unknown reality in order to use it for their optimization, and
the fact that the future behaviour of observable data which are used to verify the model is a consequence of players' choices (which, in turn, are consequences of initial choice of the model).
In such a case a model of reality is proposed (this scientific model is referred to in our paper as beliefs), players best respond to their beliefs, and the data collected afterwards are influenced by the previous choices of the players (e.g. prices in a model of the stock exchange, the state of the resource in ecological economics models). Obviously, if, in the light of the collected data, the model seems correct, there is no need to change it. Consequently, false knowledge about reality may persist and people may believe that they play a Nash equilibrium.
The phenomenon we want to focus on is best illustrated by a real-life problem: the ozone hole caused by the emission of chlorofluorocarbons (freons, CFCs).
After discovering the ozone hole, the cause of it and the possible consequences for the global society in the future, ecologists suggested decreasing the emission of CFCs, among other things by no longer using deodorants containing them.
Making such a decision seemed highly unreasonable for each individual since his/her influence on the global emission of CFCs and, consequently, the ozone layer is negligible.
Nevertheless, ecologists made some consumers believe that they are not negligible. Those consumers reduced, among other things, their usage of deodorants containing CFCs.
Afterwards, the global level of emission of CFCs decreased and the ozone hole stopped expanding and, as it is claimed now, it started to shrink.
Whatever the mechanism is, the belief "if I decide not to use deodorants containing freons, then the ozone hole will shrink" represented by some consumers can be regarded by them as verified by the empirical evidence.
Since the actual mathematics behind the ozone hole problem is quite complicated (among other things, the equation determining the size of the ozone layer contains a delay), in this paper we use another ecological example, Example 1, in which the result of human decisions may be disastrous to the whole society, but with much simpler dynamics. The example illustrates the concept before the detailed formal definitions in the general framework.
The paper is designed as follows: before the formal introduction of the model (in Sect. 3) and the concepts (in Sect. 4), a non-technical introduction is given in Sect. 2. The notation in Sect. 2 is reduced to the minimal level at which all the reasoning from Sect. 4, devoted to the definitions and properties of the concepts, and from Sect. 5, with the analysis of examples, can be understood, so that readers who are not mathematically oriented, as well as readers who want to become acquainted with the concepts first, can read it instead of, or before, the formal introduction.
The concepts of pre-BDNE and BDNE are compared to Nash equilibria and subjective equilibria in Sect. 4, just after definitions and theorems about properties and existence.
The concepts introduced in this paper are presented and compared to other concepts of equilibria, Nash and subjective equilibria, using the following examples.
1. A simple ecosystem constituting a common property of its users. We assume that the number of users is large and that every player may have problems with assessing his/her marginal influence on the aggregate extraction and, consequently, the future trajectory of the state of the resource.
This example first appears before formal introduction of the model and concepts as a clarifying example in Sect. 2.2, and all the concepts are explained and their interesting properties described using this example.
2. A repeated Minority Game, being a modification of the El Farol problem. There are players who choose each time whether to stay at home or to go to the bar. If the bar is overcrowded, then it is better to stay at home; the less crowded it is, the better it is to go.
The same model can also describe problems of so-called local public goods, e.g. a communal beach or public transportation, in which there is no possibility of exclusion while congestion decreases the utility of consumption.
3. A model of a market describing either Cournot oligopoly or a competitive market. These two cases appear as one model also in Kalai and Lehrer (1995) for the same reason as in this paper: players may have problems with assessing their actual share in the market and, therefore, their actual influence on prices, and we check the possibility of obtaining, e.g., a competitive equilibrium in Cournot oligopoly as the result of distortion of information.
4. A repeated Prisoner's Dilemma. At each stage each of two players assesses possible future reactions of the other player to his/her decision to cooperate or defect at this stage.
Other obvious examples for which the BDNE concept seems natural are, e.g., taboos in primitive societies, the formation of traffic jams, the stock exchange and our motivating example of the ozone hole problem, together with all the media campaigns concerning it.
For clarity of exposition, more technical proofs, a detailed review of other concepts of equilibria with incomplete, distorted or ambiguous information, introduction to and discussion about specific form of payoff function and games with many players are moved to the Appendices.
A non-technical introduction of the model and concepts
This section is intended for the readers who want to become acquainted with the general idea of the BDNE concept and the way it functions. It contains a brief extract from definitions and notation necessary to follow the way of reasoning and it explains everything using a clarifying example.
In this section technical details are skipped.
The full formal introduction of the mathematical model is in Sect. 3.
Before detailed introduction of the model we briefly describe it.
We consider a discrete time dynamic game (which includes also the class of repeated games as a simpler case) with the set of players \(\mathbb {I}\), possibly infinite, the set of players' decisions \(\mathbb {D}\) with sets of available decisions of each player changing during the game (which is described by a correspondence \(D_{i}\)) and the payoff functions of players \(\Pi _{i}\) being the sum of discounted instantaneous payoffs \(P_{i}\) and a terminal payoff \(G_{i}\).
Since the game is dynamic, it is played in a system \(\mathbb {X}\), whose rules of behaviour are described by a function \(\phi \) dependent on a finitely dimensional statistic of profiles of players' strategies u, e.g. the aggregate or average of the profile.
The same statistic is, together with player's own strategy and trajectory of the state variable, sufficient information to calculate payoff of this player.
Past states and statistics and current state are known to the players at each time instant. More specific information about other players' strategies is not observable at any stage.
This ends the definition of the "usual" game.
Besides, at each time instant, given observations of states and statistics (called histories) of the game, players formulate their beliefs \(B_{i}\) of future behaviour of states and statistics (called future histories), being sets of trajectories of these parameters regarded as possible.
For conciseness of notation, actual and future histories are coded in one object, called history and denoted usually by H.
Beliefs define anticipated payoffs \(\Pi _{i}^e(t,\cdot )\) being the sum of the actual instantaneous payoff at time t and the guaranteed (with respect to the belief) value of future optimal payoff \(V_{i}\). The guaranteed payoff is the infimum, over future histories in the belief, of the optimal future payoff for such a history, denoted by \(v_{i}\).
The word "anticipated" is used in the colloquial meaning of "expected", while the word "expected" is not used in order to avoid associations with expected value with respect to some probability distribution, since we want to concentrate on ambiguity. A discussion why such forms of beliefs and payoffs are considered, is contained in "The form of beliefs and anticipated payoffs considered in this paper" section in Appendix 3.
This completes the definition of game with distorted information.
At a Nash equilibrium, in the usual game, every player maximizes his/her payoff given strategy choices of the remaining players—he/she best responds to his/her information about behaviour of the remaining players, and this information is perfect.
At a pre-BDNE, analogously, every player best responds to his/her information about the behaviour of the remaining players, while this information can be partial, distorted or ambiguous. This is represented by maximization of anticipated payoff at each stage of the game.
At a BDNE, beliefs cannot be falsified during the subsequent play—a profile is a BDNE if it is a pre-BDNE and at every stage of the game, the belief set formed at that stage contains the actual history of the game.
A clarifying example
Since the original ozone hole example has too complicated dynamics to illustrate the concepts used in this paper, we use another global ecological problem with simpler dynamics. We consider an example of exploitation of a common ecosystem from Wiszniewska-Matyszkiel (2005) in a new light of games with distorted information.
In this example the natural renewable resource is crucial for the society of its users, therefore exhausting it results in starvation.
Common ecosystem Let us consider a model of a common ecosystem exploited by a large group of users, which is modelled as a game with either n players with the normalized counting measure or the set of players represented by the unit interval with the Lebesgue measure—this measure is denoted by \(\lambda \), while the set of players by \(\mathbb {I}\).
As it is discussed, among others in Wiszniewska-Matyszkiel (2005) and in Appendix 2, this form of measuring players makes the games with various numbers of players comparable—more players in such games do not mean that additional consumers of the resource entered the game, but it means that, with the same large number of actual consumers, the decision making process became more decentralized.
As the time set, we consider the set of nonnegative integers \(\mathbb {N}\).
In the simplest open loop formulation, the aim of player i is to maximize, over the set of his/her strategies \(S_i{:}\,\mathbb {N} \rightarrow \mathbb {R}_+\) Footnote 1
$$\begin{aligned} \sum _{t=0} ^{\infty } \ln (S_i (t))(1+r)^{-t}, \end{aligned}$$
under constraints
$$\begin{aligned} 0\le S_i(t)\le c \cdot X(t). \end{aligned}$$
The discount rate fulfils \(r>0\).
The trajectory of the state variable X corresponding to the whole strategy profile S is described by
$$\begin{aligned} X(t+1)=X(t)-\max (0,u^S(t)-\zeta X(t)) \end{aligned}$$
for \(\zeta >0\), called the rate of regeneration, and the initial state is \(X(0)=\bar{x}>0\).
The function \(u^S\) in the definition of X represents the aggregate extraction at various time instants, therefore it is defined by
$$\begin{aligned} u^S(t)=\int _{\mathbb {I}} S_i(t)d\lambda (i). \end{aligned}$$
We also consider more complicated closed loop strategies (dependent on the state variable) or history dependent strategies. In those cases the formulation changes in the obvious way.
Generally, the model makes sense for \(c\le (1+\zeta )\), but the most interesting results are obtained whenever \(c=1+\zeta \). Therefore, we consider this case.
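To make the dynamics concrete, the following minimal numerical sketch (an illustration only; the function names and parameter values are assumptions of this example, not taken from the paper) simulates the state trajectory and a truncated payoff for symmetric strategies of the form \(S_i(x)=z\cdot x\), so that the statistic is \(u^S(t)=z\cdot X(t)\):

```python
# A minimal numerical sketch of the common-ecosystem model: n players with the
# normalized counting measure and symmetric strategies S_i(x) = z * x, so that the
# statistic (average extraction) is u^S(t) = z * X(t).  Function names and parameter
# values below are illustrative assumptions, not part of the paper.
import math

def simulate_trajectory(x0, z, zeta, horizon):
    """State trajectory under X(t+1) = X(t) - max(0, u^S(t) - zeta * X(t))."""
    xs = [x0]
    for _ in range(horizon):
        x = xs[-1]
        u = z * x                      # average extraction at this stage
        xs.append(x - max(0.0, u - zeta * x))
    return xs

def truncated_payoff(x0, z, zeta, r, horizon):
    """Truncated discounted payoff sum_{t < horizon} ln(S_i(t)) * (1+r)^(-t)."""
    xs = simulate_trajectory(x0, z, zeta, horizon)
    total = 0.0
    for t in range(horizon):
        s = z * xs[t]                  # the player's own extraction at time t
        if s <= 0.0:
            return -math.inf           # ln(0): the resource has been destroyed
        total += math.log(s) * (1.0 + r) ** (-t)
    return total

if __name__ == "__main__":
    zeta, r, x0 = 0.1, 0.05, 1.0
    for z in (zeta, 0.5, 1.0 + zeta):  # regeneration rate, moderate, maximal extraction
        print(f"z = {z:.2f}   truncated payoff = {truncated_payoff(x0, z, zeta, r, 50)}")
```

With \(z=1+\zeta \) the whole stock is consumed at stage 0, the state becomes zero at stage 1 and the truncated payoff is \(-\infty \), which is exactly the drastic case discussed next.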
In this case, the so-called "tragedy of the commons" is present in a very drastic form: in the continuum of players case, we have total destruction of the resource in finite time, which never takes place in any case with finitely many players.
The following results are proven in Wiszniewska-Matyszkiel (2005).
Let us consider the set of players being the unit interval with the Lebesgue measure.
No dynamic profile such that a set of players of positive measure get finite payoffs is an equilibrium, and every dynamic profile yielding the destruction of the system in finite timeFootnote 2 is a Nash equilibrium. At every Nash equilibrium, the payoff of a.e. player is \(-\infty \).
Let us consider the set of players being the n-element set with the normalized counting measure.
No dynamic profile yielding payoff equal to \(-\infty \) for any player is a Nash equilibrium.
The profile S defined by \(S_{i}(x)=\bar{z}_{n}\cdot x\) for \(\bar{z}_{n}=\max \left( \frac{nr(1+r)}{1+nr},\zeta \right) \) is the only symmetric closed loop Nash equilibrium.
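As an illustration of this formula, the short sketch below (with assumed parameter values; the helper names are hypothetical) computes \(\bar{z}_{n}\) for several n and the induced geometric decay of the stock, which for these parameters stays strictly positive at every finite time:

```python
# Illustrative check of the symmetric closed-loop equilibrium fraction
# z_bar_n = max(n r (1+r) / (1 + n r), zeta) and the induced state trajectory
# X(t+1) = X(t) * (1 - max(0, z_bar_n - zeta)).  Parameter values are assumptions.
def z_bar(n, r, zeta):
    return max(n * r * (1.0 + r) / (1.0 + n * r), zeta)

def state_after(x0, z, zeta, t):
    factor = 1.0 - max(0.0, z - zeta)
    return x0 * factor ** t

if __name__ == "__main__":
    r, zeta, x0 = 0.05, 0.1, 1.0
    for n in (1, 10, 100, 10_000):
        z = z_bar(n, r, zeta)
        # For these parameters z stays strictly below 1 + zeta, so the stock decays
        # geometrically but is never destroyed at any finite time.
        print(f"n = {n:6d}   z_bar = {z:.4f}   X(20) = {state_after(x0, z, zeta, 20):.6f}")
```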
If players have full information about the game they play, then the only profiles that can sustain in the game are Nash equilibria.
This fact is really disastrous in the continuum of players case, while we cannot expect destruction of the resource in any case with finitely many players.
There are natural questions, which also appear in the context of the ozone hole problem.
What happens if players—actual decision makers—do not know the actual game they play: the equation for the state trajectory, other players' payoff functions or extracting capacities, the number of players, or even their own marginal influence on the state variable?
What if they are susceptible to various campaigns?
What if they can only formulate some beliefs, not necessarily compatible with the actual structure of the game?
Can we always expect that false models of reality are falsified by real data if players use those beliefs as the input models of their optimization?
Obviously, in numerous games, especially games with many players, we cannot expect that every player can observe strategies of the other players, or he/she even knows exactly their strategy sets and payoff functions and he/she has the capability to process those data.
In this example with many participants, as well as in the ozone hole problem, with many millions of consumers, the expectation that the only observable variables besides player's own strategy are the aggregate extraction/emission and the state variable, seems justified. Moreover, the equation describing the behaviour of the state trajectory may be not known to the general public and it is susceptible to various campaigns.Footnote 3 In such a case objective knowledge about the phenomenon considered, even if available, is not necessarily treated as the only possible truth.
The questions the concept of BDNE can answer in the case of ecological problems like this example or the ozone hole problem, are:
How dangerous such campaigns can be? What is the result of formulating beliefs according to them?
Is it possible, that because players optimize given such beliefs those beliefs are verified by the play? In such a case those beliefs may be regarded by players as scientifically verified.
Is it possible to construct a campaign or make some information confidential to save the resource even in the case when complete information results in destruction of it? Can we obtain the n-player equilibrium, not resulting in destruction of the resource, in our continuum of players game?
Can such beliefs be regarded as confirmed by the play?
To illustrate the process of calculation of a BDNE or checking self-verification of beliefs we choose the n-player game, time instant t and players' beliefs that they are negligible but with different opinions about sustainability of the resource:
(a) beliefs of each player are "my decision has no influence on the state of the resource and at the next stage the resource will be depleted";
(b) beliefs of each player are "my decision has no influence on the state of the resource and it is possible that at the next stage the resource will be depleted";
(c) beliefs of each player are "my decision has no influence on the state of the resource and the level of the resource will be always as it is now".
This process consists of the following steps.
At each stage t of the game, starting from 0, each player formulates beliefs and calculates the anticipated payoff function \(\Pi _i^e(t,S)\) given the history and his/her decision \(S_i(t)\) (for simplicity of notation we write the open loop form of the profile).
It is a sum of actual current payoff plus discounted optimal payoff that a player can obtain if the worst possible scenario happens.
In both (a) and (b), the anticipated payoff is independent of the player's own current decision and it is equal to \(- \infty \).
In (c), the guaranteed future payoff is also independent of the player's own decision and the anticipated payoff is equal to
\(\ln (S_i(t))+(1+r)^{-1}\sum _{s=1}^{\infty } \ln ((1+\zeta )X(t))(1+r)^{-s+1}\), which is finite and whose second part is independent of the player's own decision.
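Summing the geometric series (a routine verification, consistent with the definition of the guaranteed anticipated value given in Sect. 3) gives

$$\begin{aligned} \ln (S_i(t))+(1+r)^{-1}\sum _{s=1}^{\infty } \ln ((1+\zeta )X(t))(1+r)^{-s+1} =\ln (S_i(t))+\frac{\ln ((1+\zeta )X(t))}{r}, \end{aligned}$$

so under belief (c) only the term \(\ln (S_i(t))\) depends on the player's own decision.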
A part of the pre-BDNE profile, corresponding to players' choices at stage t, is chosen: a static profile such that, for every player, his/her current decision maximizes the anticipated payoff. This is a static Nash equilibrium problem.
In cases (a) and (b), every static profile can be chosen as maximizing anticipated payoff, therefore, every profile is a pre-BDNE for these beliefs.
In case (c), the only optimal choice of a player is \((1+\zeta )X(t)\), therefore the only pre-BDNE for this belief is defined by \(S_i(t)=(1+\zeta )X(t)\) for every t and i.
After repeating this for all time instants, we have profiles that are pre-BDNE.
Self-verification is checked.
In (a), only the profiles with \(S_i(0)=(1+\zeta )X(0)\) result in \(X(1)=0\), and, consequently, \(X(t)=0\) for all \(t>0\). In fact, there is only one open loop profile with this property. Any other profile (closed loop, history dependent) is equivalent to it in the sense that decisions at each stage of the game are identical: at stage 0 everything is consumed, while afterwards nothing is left.
So, in case (a), there is a single BDNE up to equivalence of open loop forms.
These beliefs have the property which we shall call potential self-verification. Therefore, it may happen that they will not be falsified during the play.
The same profile as in (a) is a BDNE in (b).
At this stage, we note that beliefs in case (b) are not precisely defined, since it is not stated what other options are regarded as possible.
If we additionally assume that e.g. the history corresponding to the n-players equilibrium profile is always in the belief set, then it is another BDNE.
If we expand the belief set and assume that every future scenario is possible, in this specific case, we obtain that every profile is a BDNEFootnote 4 and that beliefs are perfectly self-verifying.
In (c), at every pre-BDNE profile, \(X(1)=0\), which was not in the belief set at time 0. Therefore there is no BDNE and the beliefs are not even potentially self-verifying.
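The self-verification step for belief (c) can be spelled out in a few lines of code (a minimal sketch with assumed names and parameters, only restating the computation above): the unique stage-0 pre-BDNE decision \((1+\zeta )X(0)\) drives the state to zero, which contradicts the predicted constant state.

```python
# Minimal sketch of the self-verification check for belief (c) in the n-player game.
# Names and parameter values are assumptions of this illustration.
def next_state(x, u, zeta):
    """X(t+1) = X(t) - max(0, u - zeta * X(t))."""
    return x - max(0.0, u - zeta * x)

def stage_best_response_c(x, zeta):
    """Under belief (c) the guaranteed future payoff does not depend on the own
    decision and ln(S_i(t)) is increasing, so the unique maximizer is (1+zeta)*x."""
    return (1.0 + zeta) * x

if __name__ == "__main__":
    x0, zeta = 1.0, 0.1
    s0 = stage_best_response_c(x0, zeta)   # every player's stage-0 choice
    u0 = s0                                # average over identical players
    x1 = next_state(x0, u0, zeta)
    predicted_x1 = x0                      # belief (c): the state stays at its current level
    print(x1, predicted_x1, x1 == predicted_x1)   # 0.0 1.0 False: the belief is falsified
```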
For this example it can be also easily proven that the Nash equilibria from Proposition 1 are pre-BDNE for a special form of beliefs, called perfect foresight and, therefore, BDNE (see Theorem 4).
However, we are more interested in BDNE that are not Nash equilibria.
An interesting problem is to find a belief for which a BDNE does not lead to the destruction of the system even in the continuum of players case.
It turns out that a wide class of profiles at which the system is not destroyed in finite time, including the n-player Nash equilibrium, is a BDNE in the continuum of players game. Moreover, to make such a profile a BDNE in the game with a continuum of players, it is enough to educate, using apparently counter-intuitive beliefs, a small subset of the set of players (Propositions 10 and 12 b).
This result has an obvious interpretation. In the case of the real life ozone hole problem, ecological education made some people sacrifice their instantaneous utility in order to protect the system. This may happen even if people constitute a continuum. It is enough that they believe their decisions really have influence on the system.
What is even more promising, the beliefs used to obtain such a non-destructive profile as a BDNE in the continuum of players case are perfectly self-verifying (Proposition 12 a).
Since precise formulations and proofs of these results are laborious, they are moved to "Common ecosystem" section in Appendix 1.
The opposite situation—obtaining a destructive continuum of players Nash equilibrium in the n-player game—as we see from the above analysis, is also possible.
If people believe that their influence on the system is like in the case when they constitute a continuum, then they behave like a continuum. Consequently, they may destroy the system in finite time at a pre-BDNE (Proposition 11). Moreover, such a destructive profile is a BDNE (Proposition 12 c).
Formulation of the model
In this section we introduce the model formally.
Those who are not interested in mathematical precision, can achieve general orientation about the model by reading Sect. 2.
The main environment in which we work is a game with distorted information, denoted by \({\mathfrak {G}}^{\text {dist}}\).
It is built on a structure of a dynamic game \({\mathfrak {G}}\) with set of players \(\mathbb {I}\), discrete time set \(\mathbb {T}\), set of possible states \(\mathbb {X}\) and payoffs \(\Pi _i\), being a discounted sum of instantaneous payoffs \(P_i\) and, in the case of finite time horizon, terminal payoff \(G_i\). An important parameter in the game is some profile statistic u, observable by the players and representing all the information about the profile that influences trajectory of the state variable and all information about the profile besides player's own strategy which influences his/her payoff.
The difference between the \({\mathfrak {G}}^{\text {dist}}\) and \({\mathfrak {G}}\) concerns payoffs.
In \({\mathfrak {G}}^{\text {dist}}\), at each stage, players formulate beliefs \(B_i\), based on the observation of the trajectory of the state variable and past statistics, and those beliefs are used to calculate the anticipated payoffs \(\Pi _i^{e}\). Consequently, we obtain a sequence of subgames with distorted information \({\mathfrak {G}}^{\text {dist}}_{t,H}\).
However, the elements to define both kinds of games are almost all the same.
For the objective, dynamic game \({\mathfrak {G}}\) we need a tuple of the following objects \(((\mathbb {I},\mathfrak {I},\lambda ),\mathbb {T}, \mathbb {X}, (\mathbb {D},\mathcal {D)},\{D_{i}\}_{i\in \mathbb {I}},U,\phi , \{P_{i}\}_{i\in \mathbb {I}},\{G_{i}\}_{i\in \mathbb {I}},\{r_{i}\}_{i\in \mathbb {I}})\), while to define a game with distorted information \({\mathfrak {G}}^{\text {dist}}\) associated with it—\(((\mathbb {I},\mathfrak {I},\lambda ),\mathbb {T}, \mathbb {X}, (\mathbb {D},\mathcal {D)},\{D_{i}\}_{i\in \mathbb {I}},U,\phi ,\) \( \{P_{i}\}_{i\in \mathbb {I}},\{G_{i}\}_{i\in \mathbb {I}},\{r_{i}\}_{i\in \mathbb {I}},\{B_{i}\}_{i\in \mathbb {I}})\).
With this general description, we can start a detailed definition of both kinds of games.
The set of players is denoted by \(\mathbb {I}\). In order that the definitions of the paper could encompass not only games with finitely many players, but also games with a continuum of players, we introduce a structure on \(\mathbb {I}\) consisting of a \(\sigma \)-field \(\mathfrak {I}\) of its subsets and a measure \(\lambda \) on it. More information about games with a measure space of players can be found in Appendix 2.
The game is dynamic, played over a discrete time set \(\mathbb {T}\), without loss of generality \(\mathbb {T}=\{t_{0},t_{0}+1,\ldots ,T\}\) or \(\mathbb {T} =\{t_{0},t_{0}+1,\ldots \}\), which, for uniformity of notation, is treated as \(T=+\infty \). For the same reason, we introduce also the symbol \(\overline{\mathbb {T}}\) denoting \(\{t_{0},t_{0}+1,\ldots ,T+1\}\) for finite T and equal to \(\mathbb {T}\) in the opposite case.
The game is played in an environment (or system) with the set of states \(\mathbb {X}\). The state of the system (state for short) changes over time in response to players' decisions, constituting a trajectory X, whose equation is stated in the sequel. The set of all potential trajectories—functions \(X{:}\,\overline{\mathbb {T}}\rightarrow \mathbb {X}\)—is denoted by \(\mathfrak {X}\).
At each time t, given state x, player i chooses a decision from his set of available actions/decisions \(D_{i}(t,x) \subset \mathbb {D}\)—the set of (potential) actions/decisions of players. These available decision sets of player i constitute a correspondence of available decision sets of player i, \(D_{i}{:}\,\mathbb {T}\times \mathbb {X}\multimap \mathbb {D}\), while all available decision sets constitute a correspondence of available decision sets, \(D{:}\,\mathbb {I}\times \mathbb {T}\times \mathbb {X}\multimap \mathbb {D}\) with nonempty values. We also need a \(\sigma \)-field of subsets of \(\mathbb {D}\), denoted by \(\mathcal {D}\).
For any time t and state x, we call any measurable function \(\delta {:}\,\mathbb {I\rightarrow D}\) which is a selection from the correspondence \(D(\cdot ,t,x)\) a static profile available at t and x. The set of all static profiles available at t and x is denoted by \(\Sigma (t,x)\). We assume that all these sets of static profiles are nonempty.Footnote 5
The union of all the sets of static profiles available at various t and x is denoted by \(\Sigma \).
The definitions of a strategy (dynamic strategy) and a profile (dynamic profile) appear in the sequel, since first we have to define the domains of these functions.
The influence of a whole static profile \(\delta \) on the state variable is via its statistic.
Without loss of generality, the same statistic is the only parameter besides player's own action, influencing the value of his/her payoff.
Formally, a statistic is a function \(U{:}\,\Sigma \times \mathbb {X}\mathop {\rightarrow }\limits ^{\text {onto}}\mathbb {U}\subset \mathbb {R}^{m}\), defined by \(U(\delta ,x)=\left[ \int _{\mathbb {I}}g_{k}(i,\delta (i),x)d\lambda (i)\right] _{k=1}^{m}\), for a collection of functions \(g_{k}{:}\,\mathbb {I}\times \mathbb {D}\times \mathbb {X}\rightarrow \mathbb {R}\), which are \(\mathfrak {I}\otimes \mathcal {D}\)-measurable for every \(x\in \mathbb {X}\) and every k.Footnote 6
The set \(\mathbb {U}\) is called the set of profile statistics.
If \(\Delta {:}\,\mathbb {T}\rightarrow \Sigma \) represents choices of profiles at various time instants and X is a trajectory of the system, then we denote by \(U(\Delta ,X)\) the function \(u{:}\,\mathbb {T}\rightarrow \mathbb {U}\) such that \(u(t)=U(\Delta (t),X(t))\). The set of all such functions is denoted by \(\mathfrak {U}\).
Given a function \(u{:}\,\mathbb {T}\rightarrow \mathbb {U}\), representing the statistics of profiles chosen at various time instants, the system evolves according to the equation \(X(t+1)=\phi (X(t),u(t))\), with the initial condition \(X(t_{0})=\bar{x}\). We call such a trajectory corresponding to u and denote it by \(X^{u}\). If \(u=U(\Delta ,X^{u})\), where \(\Delta {:}\,\mathbb {T}\rightarrow \Sigma \) represents a choice of static profiles at various time instants, then, by a slight abuse of notation, we denote the trajectory corresponding to u by \(X^{\Delta }\) and call it corresponding to \(\Delta \) and instead of \(U(\Delta ,X^{\Delta })\), we write \(U(\Delta )\)—the statistic of \(\Delta \).
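A schematic sketch of these objects (an assumed finite-player, one-dimensional simplification; all identifiers below are hypothetical and do not reproduce the paper's notation in code) may clarify how a trajectory \(X^{\Delta }\) and the statistic \(U(\Delta )\) are generated from a choice of static profiles:

```python
# Schematic sketch: generating the trajectory X^Delta and the statistic U(Delta)
# from a choice of static profiles Delta : T -> Sigma, under the simplifying
# assumptions of finitely many players and a one-dimensional statistic (m = 1).
from typing import Callable, List, Sequence

def statistic(delta: Sequence[float], x: float,
              g: Callable[[int, float, float], float]) -> float:
    """U(delta, x): here the integral over players reduces to a normalized average."""
    return sum(g(i, a, x) for i, a in enumerate(delta)) / len(delta)

def trajectory(x0: float,
               Delta: List[Sequence[float]],
               phi: Callable[[float, float], float],
               g: Callable[[int, float, float], float]):
    """Returns (X, u) with u(t) = U(Delta(t), X(t)) and X(t+1) = phi(X(t), u(t))."""
    xs, us = [x0], []
    for delta in Delta:
        u = statistic(delta, xs[-1], g)
        us.append(u)
        xs.append(phi(xs[-1], u))
    return xs, us

if __name__ == "__main__":
    zeta = 0.1
    phi = lambda x, u: x - max(0.0, u - zeta * x)   # the ecosystem dynamics of Example 1
    g = lambda i, a, x: a                           # statistic = average extraction
    Delta = [[0.3, 0.3, 0.3], [0.2, 0.2, 0.2]]      # two stages, three players
    print(trajectory(1.0, Delta, phi, g))
```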
At each time instant during the game, players get instantaneous payoffs. The instantaneous payoff of player i is a function \(P_{i}{:}\,\mathbb {D}\times \mathbb {U}\times \mathbb {X}\rightarrow \mathbb {R\cup \{-\infty \}}\).
Besides, in the case of finite time horizon, players get also terminal payoffs (after termination of the game), defined by functions \(G_{i}{:}\,\mathbb {X}\rightarrow \mathbb {R\cup \{-\infty \}}\). For uniformity of notation, we take \(G_{i}\equiv 0\) in the case of infinite time horizon.
Players observe some histories of the game, but not the whole profiles. At time t they observe the states X(s) for \(s\le t\) and the statistics u(s) of static profiles for time instants \(s<t\). Therefore the set \(\mathbb {H}_{t}\) of observed histories at time t equals \(\mathbb {X}^{t-t_{0}+1}\mathbb {\times }\mathbb {U}^{t-t_{0}}\). To simplify further notation, we introduce the set of all, possibly infinite, histories of the game \(\mathbb {H}_{\infty }=\mathbb {X} ^{T-t_{0}+2}\mathbb {\times }\mathbb {U}^{T-t_{0}+1}\). For such a history \(H\in \mathbb {H}_{\infty }\) we denote by \(H|_{t}\) the actual history observed at time t.
Given an observed history \(H_{t}\in \mathbb {H}_{t}\), players formulate their suppositions about future values of u and X, depending on their decision a made at time t.
This is formalized as a multivalued correspondence of player's i belief \(B_{i}{:}\,\mathbb {T}\times \mathbb {D}\times \mathbb {H}_{\infty }\multimap \mathbb {H}_{\infty }\) with nonempty values.
Obviously, we assume that beliefs \(B_{i}(t,a,H)\) are identical for histories H with identical observed history \(H|_{t} \) and that for all \(H^{\prime }\in B_{i}(t,a,H)\) we have \(H^{\prime }|_{t}=H|_{t}\). Note also that in fact, the value of u(t) is redundant in the definition of beliefs at time t, since as input we need u(s) for \(s<t\) only, while as output we are interested only in u(s) for \(s>t\). Therefore, for simplicity of further definitions of self-verification, we take the largest set of possible values of u(t)—\(\mathbb {U}\). Those assumptions are only consequences of using simplifying notational convention, which allows to code both observed histories and beliefs in one element of \(\mathbb {H}_{\infty }\).
Besides the above assumptions, we do not impose a priori any other constraints, like an equivalent of Bayesian updating.
This means that we consider a wide class of belief correspondences as inputs to our model. We shall return to the question how players update beliefs and whether some beliefs cannot be ex post regarded as justified in Sect. 4.3, when we introduce the self-verification of beliefs.
The next stage is the definition of players' strategies in the games—both the actual dynamic game and the game with distorted information. We consider very compound closed loop strategies—dependent on time instant, state and the actual history of the game at this time instant. Formally, a (dynamic) strategy of player i is a function \(S_{i}{:}\,\mathbb {T}\times \mathbb {X}\times \mathbb {H}_{\infty }\rightarrow \mathbb {D}\) such that for each time t, state x and history H, we have \(S_{i}(t,x,H)\in D_{i}(t,x)\) and dependence of \(S_i(t,x,H)\) on H is restricted to the actual history observed at time t, i.e. \(H|_t\).
Since players' beliefs in the game with distorted information depend on observed histories of the game, such a definition may encompass also dependence on beliefs.
Such choices of players' strategies constitute a function \(S{:}\,\mathbb {I}\times \mathbb {T}\times \mathbb {X}\times \mathbb {H}_{\infty }\rightarrow \mathbb {D}\).Footnote 7 The set of all strategies of player i is denoted by \(\mathfrak {S}_{i}\).
For simplicity of further notation, for a choice of strategies \(S=\{S_{i}\}_{i\in \mathbb {I}}\), we consider the open loop form of it, \(S^{OL}{:}\,\mathbb {T}\rightarrow \Sigma \).
It is defined by \(S_{i}^{OL}(t)=S_{i}(t,X(t),H^S) \) for \(H^S\) denoting the history of the game resulting from choosing S.
Open loop form is well defined, whenever the history is well defined.Footnote 8
Therefore we restrict the notion of a (dynamic) profile (of players' strategies) to choices of strategies such that for all t, the function \(S_{\cdot }^{OL}(t)\) is a static profile available at \(t\in \mathbb {T}\) and \(X^S(t)\) (\(=X^{S^{OL}}(t))\).
The set of all dynamic profiles is denoted by \(\mathbf {\Sigma }\).
If the players choose a dynamic profile S, then the actual payoff of player i, \(\Pi _{i}{:}\,\mathbf {\Sigma }\rightarrow \overline{\mathbb {R}}\), can be written in a way dependent only on the open loop form of the profile:
$$\begin{aligned} \Pi _{i}(S)= & {} \sum _{t=t_{0}}^{T}P_{i}\left( S_{i}^{OL}(t),U\left( S^{OL}(t)\right) ,X^{S}(t)\right) \cdot \left( \frac{1}{1+r_{i}} \right) ^{t-t_{0}}\\&\quad +\,G_{i}\left( X^{S}(T+1)\right) \cdot \left( \frac{1}{1+r_{i}}\right) ^{T+1-t_{0}}, \end{aligned}$$
where \(r_{i}>0\) is a discount rate of player i.
Since we assumed that the instantaneous and terminal payoffs can have infinite values, we add an assumption that for all S the value \(\Pi _{i}(S) \) is well defined—which holds if e.g. \(P_{i}\) are bounded from above or for every profile S, the functions \(t \mapsto P_{i}\left( S_{i}^{OL}(t),U\left( S^{OL}(t)\right) ,X^{S}(t)\right) \) are bounded from above by a polynomial function whenever \(G_{i}\left( X^{S}(T+1)\right) \) can attain \(-\infty \).
However, players do not know the whole profile, therefore, instead of the actual payoff at each future time instant, they can use in their calculations the anticipated payoff functions, \(\Pi _{i}^{e}{:}\,\mathbb {T}\times \mathbf {\Sigma }\rightarrow \overline{\mathbb {R}}\), corresponding to their beliefs at the corresponding time instants.
This function for player i is defined by
$$\begin{aligned} \Pi _{i}^{e}(t,S)=P_{i}\left( S_{i}^{OL}(t),U\left( S^{OL}(t)\right) ,X^{S}(t)\right) +V_{i}(t+1,B_{i}(t,S_{i}^{OL}(t),H^{S}))\cdot \frac{1}{1+r_{i}}, \end{aligned}$$
where \(V_{i}{:}\,\overline{\mathbb {T}}\times \left( \mathfrak {P}(\mathbb {H}_{\infty })\setminus \{\emptyset \}\right) \rightarrow \overline{\mathbb {R}}\) (the function of guaranteed anticipated value or guaranteed anticipated payoff) represents the present value of the minimal future payoff given his/her belief correspondence and assuming that player i chooses optimally in the future.
Formally, for time t and belief set \(\mathbb {B}\in \mathfrak {P}(\mathbb {H}_{\infty })\setminus \{\emptyset \}\), we define
$$\begin{aligned} V_{i}(t,\mathbb {B})=\inf _{H\in \mathbb {B}}v_{i}(t,H), \end{aligned}$$
where the function \(v_{i}{:}\,\overline{\mathbb {T}}\mathbb {\times H}_{\infty }\rightarrow \overline{\mathbb {R}}\) is the present value of the future payoff of player i along a history under the assumption that he/she chooses optimally in the future:
$$\begin{aligned} v_{i}(t,(X,u))=\sup _{d{:}\,\mathbb {T}\rightarrow \mathbb {D},\ d(s)\in D_{i}(s,X(s)) \text { for } s\ge t}\,\, \sum _{s=t}^{T}\frac{P_{i}(d(s),u(s),X(s))}{(1+r_{i})^{s-t}}+\frac{G_i\left( X(T+1)\right) }{(1+r_i)^{T+1-t}}. \end{aligned}$$
Note that such a definition of anticipated payoff mimics the Bellman equation for calculating players' best responses to the strategies of the others. For various versions of this equation see e.g. Bellman (1957), Blackwell (1965) or Stokey et al. (1989).
It is also worth emphasizing that the construction of the function \(V_i\) is analogous to the approach proposed by Gilboa and Schmeidler (1989).Footnote 9
This completes the definition of the game with distorted information \({\mathfrak {G}}^{\text {dist}}\).
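The following finite-horizon sketch (an illustrative simplification with hypothetical names, not the paper's definitions transcribed into code) shows how the three objects fit together: since a believed history fixes \((X,u)\), the supremum defining \(v_{i}\) decomposes stage by stage, \(V_{i}\) is an infimum over the belief set, and the anticipated payoff adds the current instantaneous payoff to the discounted guaranteed value.

```python
# Finite-horizon sketch of v_i, V_i and the anticipated payoff Pi_i^e.
# A history is encoded as (X, u); all names and the finite belief set are assumptions.
from typing import Callable, List, Sequence, Tuple

History = Tuple[Sequence[float], Sequence[float]]   # (states X, statistics u)

def v(t: int, history: History, r: float,
      P: Callable[[float, float, float], float],
      D: Callable[[int, float], Sequence[float]]) -> float:
    """Optimal future payoff along a fixed believed history, in present value at time t.
    Because (X, u) is fixed, the supremum reduces to a per-stage maximization."""
    X, u = history
    total = 0.0
    for s in range(t, len(u)):
        total += max(P(a, u[s], X[s]) for a in D(s, X[s])) / (1.0 + r) ** (s - t)
    return total

def V(t: int, belief: List[History], r, P, D) -> float:
    """Guaranteed anticipated value: infimum of v over the histories regarded as possible."""
    return min(v(t, H, r, P, D) for H in belief)

def anticipated_payoff(a_now, u_now, x_now, t, belief, r, P, D) -> float:
    """Pi_i^e(t, .) = current instantaneous payoff + discounted guaranteed value."""
    return P(a_now, u_now, x_now) + V(t + 1, belief, r, P, D) / (1.0 + r)

if __name__ == "__main__":
    import math
    P = lambda a, u, x: math.log(a) if a > 0 else -math.inf
    D = lambda s, x: [0.5 * x, 1.1 * x] if x > 0 else [0.0]
    H1 = ([1.0, 1.0, 1.0], [0.5, 0.5, 0.5])   # believed history: the state stays constant
    H2 = ([1.0, 0.5, 0.0], [0.5, 0.5, 0.5])   # believed history: the state collapses
    print(anticipated_payoff(0.5, 0.5, 1.0, 0, [H1, H2], 0.05, P, D))   # -inf (worst case)
```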
We can also introduce the symbol \({\mathfrak {G}}^{\text {dist}}_{t,H}\) denoting the game with the set of players \(\mathbb {I}\), the sets of their strategies \(D_{i}(t,X(t))\) and the payoff functions \(\bar{\Pi }_{i}^{e}(t,H,\cdot ){:}\,\Sigma \rightarrow \overline{\mathbb {R}}\) defined by \(\bar{\Pi }_{i}^{e}(t,H,\delta )=\Pi _{i}^{e}(t,S)\) for a profile S such that \(S(t)=\delta \) and \(H^{S}|_{t}=H|_{t}\) (note that the dependence of \(\Pi _{i}^{e}(t,\cdot )\) on the profile is restricted to its static profile at time t only, and the history observed at time t, therefore the definition does not depend on the choice of specific S).
We call these \({\mathfrak {G}}^{\text {dist}}_{t,H}\) subgames with distorted information.
Nash equilibria, belief distorted Nash equilibria and subjective Nash equilibria
One of the basic concepts in game theory, Nash equilibrium, assumes that every player, or almost every in the case of large games with a measure space of players, chooses a strategy which maximizes his/her payoff given the strategies of the remaining players.
Notational convention In order to simplify notation, we need the following abbreviation: for a profile S and a strategy d of player i, the symbol \(S^{i,d}\) denotes the profile such that \(S_{i}^{i,d}=d\) and \(S_{j}^{i,d}=S_{j}\) for \(j\ne i\).
Definition 1
A profile S is a Nash equilibrium if for a.e. \(i\in \mathbb {I}\) and for every strategy d of player i, we have \(\Pi _{i}(S)\ge \Pi _{i}(S^{i,d})\).
Pre-BDNE: towards BDNE
The assumption that a player knows the strategies of the remaining players, or at least the future statistics of these strategies which influence his/her payoff, is not fulfilled in many real-life situations. Moreover, even details of the other players' payoff functions or available strategy sets are sometimes not known precisely, while other players' information in a specific situation is usually unknown. Therefore, given their beliefs, players can only maximize their anticipated payoffs.
A profile S is a pre-belief distorted Nash equilibrium (pre-BDNE for short), if for a.e. \(i\in \mathbb {I}\), every strategy d of player i and every \(t\in \mathbb {T}\), we have \(\Pi _{i}^{e}(t,S)\ge \Pi _{i}^{e}(t,S^{i,d})\).
With notation introduced in Sect. 3, a profile S is a pre-BDNE in \({\mathfrak {G}}^{\text {dist}}\) if at each time t, the static profile \(S^{OL}(t)\) is a Nash equilibrium in \({\mathfrak {G}}^{\text {dist}}_{t,H^{S}}\).
This formulation reveals that, compared to looking for a Nash equilibrium in a dynamic game, finding a pre-BDNE, given beliefs, is simpler.Footnote 10
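A schematic sketch of this stage problem (under assumed discretized decisions and finitely many players; it illustrates the observation above and is not an algorithm from the paper) is best-response iteration on the anticipated payoffs at a fixed stage:

```python
# Stage-by-stage computation of a pre-BDNE: at each time t, solve a static Nash
# equilibrium problem for the anticipated payoffs.  Here best-response iteration on a
# finite action grid; all names and parameters are assumptions of this illustration.
from typing import Callable, List, Sequence

def stage_nash(n_players: int,
               actions: Sequence[float],
               anticipated: Callable[[int, float, List[float]], float],
               max_rounds: int = 100) -> List[float]:
    """anticipated(i, a, profile): player i's anticipated payoff when playing a while
    the others play as in profile.  Convergence is not guaranteed in general."""
    profile = [actions[0]] * n_players
    for _ in range(max_rounds):
        changed = False
        for i in range(n_players):
            best = max(actions, key=lambda a: anticipated(i, a, profile))
            if best != profile[i]:
                profile[i], changed = best, True
        if not changed:
            break
    return profile

if __name__ == "__main__":
    import math
    x, zeta = 1.0, 0.1
    grid = [0.1 * k * x for k in range(1, 12)]       # decisions up to (1 + zeta) * x
    # Under belief (c) of the clarifying example the future term does not depend on
    # the own decision, so the stage objective reduces to ln(a).
    anticipated = lambda i, a, prof: math.log(a)
    print(stage_nash(3, grid, anticipated))          # every player chooses about 1.1
```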
First, we have an obvious equivalence with Nash equilibria, since distortion of beliefs concerns only future.
In a one-shot game, i.e. for \(T=t_{0}\) and \(G\equiv 0\), a profile is a pre-BDNE if and only if it is a Nash equilibrium.
Now it is time to state an existence result for games with a nonatomic space of players.
Theorem 3
Let \((\mathbb {I},\mathfrak {I},\lambda )\) be a nonatomic measure space and let \(\mathbb {D}= \mathbb {R}^{n}\) with the \(\sigma \)-field of Borel subsets.
Assume that for every t, x, H and for almost every i, the following continuity assumptions hold: the sets \(D_{i}(t,x)\) are compact, the functions \(P_{i}(a,u,x)\) and \(V_{i}(t,B_{i}(t,a,H))\) are upper semicontinuous in (a, u) jointlyFootnote 11 while for every a, they are continuous in u, and for all k, the functions \(g_{k}(i,a,x)\) are continuous in a for \(a\in D_{i}(t,x)\).
Assume also that for every t, x, u and H, the following measurability assumptions hold: the graph of \(D_{\cdot }(t,x)\) is measurable and the following functions defined on \(\mathbb {I}\times \mathbb {D}\) are measurable \((i,a)\mapsto P_{i}(a,u,x)\), \(r_{i}\), \(V_{i}(t,B_{i}(t,a,H))\) and \(g_{k}(i,a,x)\) for every k.
Moreover, assume that for every k and x, there exists an integrable function \(\Gamma {:}\,\mathbb {I\rightarrow R}\) such that for every \(a\in D_{i}(t,x)\), \(\left| g_{k}(i,a,x)\right| \le \Gamma (i)\).
Under these assumptions there exists a pre-BDNE for B.
It is a conclusion from one of theorems on the existence of pure strategy Nash equilibria in games with a continuum of players: Wiszniewska-Matyszkiel (2000) Theorem 3.1 or Balder (1995) Theorem 3.4.1, applied to the sequence of games \({\mathfrak {G}}^{\text {dist}}_{t,H}\) for any history H such that \(H|_t\) is the actual history of the game observed at time t. \(\square \)
Theorem 3 is analogous to the existence results of pure strategy Nash equilibria in games with a continuum of players, which hold under quite weak assumptions.
Now we are going to show some properties of pre-BDNE for a special kind of belief correspondence.
A belief correspondence \(B_{i}\) of player i is the perfect foresight at a profile S if for all t, it fulfils \(B_{i}(t,S_{i}^{OL}(t),H^{S})=\{H^{S}\}\).
Let \((\mathbb {I},\mathfrak {I},\lambda )\) be a nonatomic measure space and let player's payoffs be bounded for a.e. player.
Consider a profile \(\bar{S}\) with statistic of its open loop form u and the corresponding trajectory X.
(a) Let \(\bar{S}\) be a Nash equilibrium profile.
If B is the perfect foresight at a profile \(\bar{S}\) and at the profiles \(\bar{S}^{i,d}\) for a.e. i and every strategy d of player i, then for all t and a.e. i, \(\bar{S} _{i}^{OL}(t)\in \mathop {\mathrm{{Argmax}}}_{a\in D_{i}(t,X(t))}\bar{\Pi }_{i}^{e}(t,H^{\bar{S}},(\bar{S}^{OL}(t))^{i,a})\) Footnote 12 and \(\bar{S}_{i}^{OL}|\{t+1,\ldots ,T\}\) is consistent with the results of player's i optimizations used in the definition of \(v_{i}\) at consecutive time instants.Footnote 13
(b) If \(\bar{S}\) is a Nash equilibrium, then it is a pre-BDNE for a belief correspondence being the perfect foresight at \(\bar{S}\) and at all profiles \(\bar{S}^{i,d}\).
(c) Let \(\bar{S}\) be a pre-BDNE at a belief B. If B is the perfect foresight at a profile \(\bar{S}\) and at the profiles \(\bar{S}^{i,d}\) for a.e. i and every strategy d of player i, then for a.e. i, choices of player i are consistent with the results of his/her optimization used in definition of \(v_{i}\) at consecutive time instants.
(d) If a profile \(\bar{S}\) is a pre-BDNE for a belief B being the perfect foresight at this \(\bar{S}\) and \(\bar{S}^{i,d}\) for a.e. player i and every strategy d of player i, then it is a Nash equilibrium.
For transparency of exposition, the proof is in Appendix 1.
Theorem 4 says that in games with a continuum of players, pre-BDNE for the perfect foresight beliefs are equivalent to Nash equilibria. Moreover, we have time consistency: when solving the optimization problem and stating his/her optimal guaranteed value of future payoff at any stage, a player calculates the strategy which is actually played at the pre-BDNE.
A Nash equilibrium profile \(\bar{S}\) may be also represented as a pre-BDNE for a belief which is not the perfect foresight at \(\bar{S}\) or \(\bar{S}^{i,d}\).
Now we formulate an equivalence theorem for repeated games.
Let \({\mathfrak {G}}^{\text {dist}}\) be a repeated game with a belief correspondence not dependent on players' own strategies, in which payoffs and anticipated payoffs are bounded for a.e. player.
If \((\mathbb {I},\mathfrak {I},\lambda )\) is a nonatomic measure space, then a profile S is a pre-BDNE if and only if it is a Nash equilibrium.
Every profile S with strategies of a.e. player being independent of histories is a pre-BDNE if and only if it is a Nash equilibrium.
Proof of this theorem is also in Appendix 1.
The final concept of BDNE
It seems obvious that the definition of equilibrium should combine the equilibrating and payoff-maximization conditions defining pre-BDNE with some kind of self-verification of beliefs: the concept of pre-BDNE lacks a condition ensuring that observations along the profile do not contradict the players' beliefs.
The most obvious way of measuring self-verification of beliefs is by the accuracy of prediction of observable variables. Sometimes, in evolutionary game theory, players look only at payoffs, and if they obtain payoffs as assumed, they do not have any incentive to change their choices and beliefs.
In this paper we consider the former measure: by accuracy of prediction of observable variables—statistic and state.
The main reason is that in reality, even if a player obtains payoffs as he/she assumed, the fact that he/she observes reality which was previously regarded as impossible, is an incentive to change beliefs.
With this idea of self-verification we can end up with the definition of BDNE.
A profile \(\bar{S}\) is a belief-distorted Nash equilibrium (BDNE) for a collection of beliefs \(B=\{B_{i}\}_{i\in \mathbb {I}}\) if \(\bar{S}\) is a pre-BDNE for B and for a.e. i and every t, \(H^{\bar{S}} \in B_{i}(t,\bar{S}_{i}^{OL}(t),H^{\bar{S}})\).
A profile \(\bar{S}\) is a belief-distorted Nash equilibrium (BDNE) if there exists a collection of beliefs \(B=\{B_{i}\}_{i\in \mathbb {I}}\) such that \(\bar{S}\) is a BDNE for B.
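In code, the extra condition distinguishing a BDNE from a pre-BDNE is a pointwise membership check (a minimal sketch; the finite encoding of histories and beliefs and all names are assumptions of this illustration):

```python
# Self-verification check turning a pre-BDNE into a BDNE: the realized history must
# belong to the belief set formed at every stage by (almost) every player.
from typing import Callable, List, Sequence, Tuple

History = Tuple[Sequence[float], Sequence[float]]      # (states X, statistics u)

def is_bdne(realized: History,
            decisions: Sequence[Sequence[float]],      # decisions[t][i]: player i's choice at t
            belief: Callable[[int, int, float, History], List[History]]) -> bool:
    """Assumes the profile is already a pre-BDNE; checks H^S in B_i(t, S_i(t), H^S)."""
    stages = len(decisions)
    players = len(decisions[0]) if stages else 0
    return all(realized in belief(i, t, decisions[t][i], realized)
               for t in range(stages) for i in range(players))

if __name__ == "__main__":
    H = ([1.0, 0.0], [1.1])                            # X(0), X(1) and the stage-0 statistic
    perfect_foresight = lambda i, t, a, hist: [hist]   # belief set = {realized history}
    print(is_bdne(H, decisions=[[1.1, 1.1]], belief=perfect_foresight))   # True
```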
We can state the following equivalence results being immediate consequences of the corresponding equivalence results for pre-BDNE.
Theorems 4 and 5 and Remark 1 remain true if we replace pre-BDNE by BDNE.
In Remark 1 there is nothing to prove.
Equivalence in Theorems 4 and 5 is also immediate—if we take the perfect foresight beliefs, then a profile S is a pre-BDNE for perfect foresight if and only if it is a BDNE for perfect foresight, since for perfect foresight the actual history is in the belief set. \(\square \)
Corollary 7
In games with a continuum of players with payoffs of a.e. player bounded, every Nash equilibrium is a BDNE.
This implies the existence of a BDNE in games with a continuum of players whenever there exists a Nash equilibrium. Existence results and properties of equilibria for such games are proven, among others, in Wiszniewska-Matyszkiel (2002) for the open loop information structure. However, by Wiszniewska-Matyszkiel (2014a), in discrete time dynamic games with a continuum of players the existence of an open loop Nash equilibrium is equivalent to the existence of a closed loop Nash equilibrium.
Obviously, we are interested in the existence of BDNE which are not Nash equilibria. An obvious question is whether it is always possible to construct beliefs, with some minimal properties, such that a game without pure strategy Nash equilibria has a BDNE. The answer is negative and a counterexample is stated in Proposition 9.
After defining the concept and examining its properties, we compare BDNE with subjective Nash equilibrium—to the best of the author's knowledge the only solution concept which can deal with seriously distorted information, like our concept of BDNE.
Since in our approach we consider only pure strategy profiles, we have to restrict attention to pure strategy subjective equilibria.
First of all, subjective equilibria are used only in the environment of repeated games, or one stage games that potentially can be repeated. Therefore, the concept of BDNE can be used in a wider environment than subjective equilibria.
If we compare both kinds of equilibria in the same environment of repeated games, without state variables, there are further differences.
First, note that the concept of "consequences" of actions, used in the definition of subjective equilibria, corresponds to our statistic function and beliefs about its values.
The first difference is that they have the form of a probability distribution, while here it is the set of histories regarded as possible, which are deterministic.
We can still compare these concepts in the case when both expectations are trivial: a probability distribution concentrated at a point in the subjective equilibrium approach and a singleton as the set of possible histories in our approach. In this case, we can see the second main difference.
In the subjective equilibrium approach, beliefs, coded by environmental response function, describing how the environment is going to react to our decision at the considered time instant, concern only that time instant.
Moreover, decisions at each time instant are taken without foreseeing the future, and beliefs (as in our paper based on histories) apply to the present stage of the game only. So at each stage players just optimize given their beliefs about behaviour of the statistic of the profile at this stage and do not think about future consequences of their moves.
So, in the subjective equilibrium approach, subsequent stage games are separated in the sense that a player does not take into account the fact that the results of his/her current decision can influence the future behaviour of the other players, unlike in most repeated games. Consequently, there is no risk of experimenting in order to discover the structure of the game other than a loss of current payoff. The case of the repeated Prisoner's Dilemma (Example 4) shows how erroneous such an approach can be.
In Example 3, we present a situation, first considered in Kalai and Lehrer (1995), when a direct comparison between subjective equilibria and pre-BDNE is possible (the case in which beliefs are concentrated at one point). In this case there are profiles of strategies that constitute pure strategy subjective equilibria and are not even pre-BDNE.
Example 4 illustrates the opposite situation: there are BDNE which cannot constitute subjective equilibria without assuming beliefs which are totally against the logic of the game, since we have simultaneous moves in stage games. A pair of cooperative strategies can be a subjective equilibrium in the repeated game only if each of the players assumes that the opponent is going to punish his/her choice of defective strategy immediately, at the same stage of the game.
Besides, in the subjective equilibrium theory, there is a condition that beliefs are not contradicted by observations, i.e. that frequencies of various results correspond to the assumed probability distributions.
Similarly, in the definition of BDNE we introduce a criterion of self-verification saying that beliefs are not contradicted by observations—here in the sense that the actually observed history is always in the set of beliefs.
Self-verification of beliefs
Besides the concept of BDNE, we can also consider separately self-verification as a property of beliefs.
Assume that players, given some beliefs, choose a profile being a pre-BDNE for them.
We can consider potential and perfect self-verification—stating, correspondingly, that "it is possible that beliefs will never be falsified", or "for sure, beliefs will never be falsified".
A collection of beliefs \(B=\{B_{i}\}_{i\in \mathbb {I}}\) is perfectly self-verifying if there exists a BDNE for B and for every pre-BDNE \(\bar{S}\) for B, for a.e. \(i\in \mathbb {I}\) and a.e. t, we have \(H^{\bar{S}}\in B_{i}(t,\bar{S}_{i}^{OL}(t),H^{\bar{S}})\).
A collection of beliefs \(B=\{B_{i}\}_{i\in \mathbb {I}}\) is potentially self-verifying if there exists a pre-BDNE \(\bar{S}\) for B such that for a.e. \(i\in \mathbb {I}\) and a.e. t, we have \(H^{\bar{S}}\in B_{i}(t,\bar{S}_{i}^{OL}(t),H^{\bar{S}})\).
A collection of beliefs \(\{B_{i}\}_{i\in \mathbb {J}}\) of a set of players \(\mathbb {J}\) is perfectly self-verifying against beliefs of the other players \(\{B_{i}\}_{i\in \mathbb {I}\backslash \mathbb {J}}\) if there exists a BDNE for \(\{B_{i}\}_{i\in \mathbb {I}}\) and for every pre-BDNE \(\bar{S}\) for \(\{B_{i}\}_{i\in \mathbb {I}}\) for a.e. \(i\in \mathbb {J}\) and a.e. t, we have \(H^{\bar{S} }\in B_{i}(t,\bar{S}_{i}^{OL}(t),H^{\bar{S}})\).
A collection of beliefs \(\{B_{i}\}_{i\in \mathbb {J}}\) of a set of players \(\mathbb {J}\) is potentially self-verifying against beliefs of the other players \(\{B_{i}\}_{i\in \mathbb {I}\backslash \mathbb {J}}\) if there exists a pre-BDNE \(\bar{S}\) for \(\{B_{i}\}_{i\in \mathbb {I}}\) such that for a.e. \(i\in \mathbb {J}\) and a.e. t, we have \(H^{\bar{S}}\in B_{i}(t,\bar{S}_{i}^{OL}(t),H^{\bar{S}})\).
Every pre-BDNE for perfectly self-verifying beliefs is a BDNE.
If there exists a BDNE for B, then B are potentially self-verifying.
After introducing these concepts, we can return to the discussion about properties of beliefs.
Beliefs which are not even potentially self-verifying, cannot be regarded as rational in any situation.
On the other hand, beliefs which are perfectly self-verifying will never be falsified if players optimize according to them.
Now we illustrate the notions of pre-BDNE, self-verification and BDNE by the examples mentioned in Sect. 1.1. We also compare them with Nash equilibria and, if possible, with subjective equilibria.
In those examples we start by calculating pre-BDNE for various forms of beliefs with special focus on those which are inconsistent with the "common sense" implied by the objective dynamic game structure. Afterwards we check self-verification of those beliefs and check whether the pre-BDNE found are BDNE.
Repeated El Farol bar problem with many players or public goods with congestion
Here we present a modification of the model first stated by Brian Arthur (1994) as the El Farol bar problem, allowing us to analyse both the continuum and the finitely many players cases.
There are players who choose each time whether to stay at home—represented by 0—or to go to the bar—represented by 1. If the bar is overcrowded, then it is better to stay at home; the less crowded it is, the better it is to go.
The same model can also describe problems of so-called local public goods, where congestion decreases the utility of consumption.
We consider the space of players being either the unit interval with the Lebesgue measure or the set \( \{ 1,\ldots ,n \} \) with the normalized counting measure. The game is repeated, therefore dependence on state variables is trivial, so we skip them in the notation.
The statistic of a static profile \(\delta \) is \(U(\delta )=\int _{\mathbb {I}}\delta (i)d\lambda (i)\).
In our model, we reflect the effect of congestion by the instantaneous payoff functions \(P_{i}(0,u)=0\) and \(P_{i}(1,u)=\frac{1}{2}-u\).
To make the payoffs finite, we assume that either T is finite or \(r_i>0\). Besides, we take \(G \equiv 0\).
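To make the stage-game incentives concrete, here is a minimal numerical sketch (the number of players, the particular profile and the use of Python are illustrative assumptions, not part of the model): it computes the statistic of a static profile, the resulting instantaneous payoffs, and checks that with \(u=\frac{1}{2}\) no player can strictly gain by a unilateral deviation.

```python
# Minimal sketch of the stage game: action 0 = stay at home, action 1 = go to the bar.
# Instantaneous payoffs are P_i(0, u) = 0 and P_i(1, u) = 1/2 - u, where u is the
# statistic of the static profile (here: the average action, normalized counting measure).

def statistic(profile):
    return sum(profile) / len(profile)

def payoff(action, u):
    return 0.0 if action == 0 else 0.5 - u

if __name__ == "__main__":
    profile = [1, 1, 0, 0]              # illustrative: half of n = 4 players go to the bar
    u = statistic(profile)              # equals 1/2
    print("u =", u, "payoffs:", [payoff(a, u) for a in profile])
    # With u = 1/2 every player is indifferent, so no unilateral deviation is profitable.
    for i, a in enumerate(profile):
        deviation = profile[:i] + [1 - a] + profile[i + 1:]
        assert payoff(1 - a, statistic(deviation)) <= payoff(a, u) + 1e-12
```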
Consider the continuum of players case and beliefs independent of player's own choice a.
The set of BDNE coincides with the set of pure strategy Nash equilibria and the set of pure strategy subjective equilibria.
Moreover, at every pre-BDNE, for all t, \(u(t)=\frac{1}{2}\).
The last fact is obvious, the first one is a consequence of Theorem 5, so the only fact that remains to be proved is that at every subjective equilibrium \(u(t)=\frac{1}{2}\).
The environmental response function assigns a probability distribution describing a player's beliefs about u(t). All players who believe that the event \(u(t)>\frac{1}{2}\) has greater probability than \(u(t)<\frac{1}{2}\) choose 0, while those who believe in the opposite inequality choose 1; the remaining players may choose either of the two strategies. If the number of players choosing 0 is greater than the number choosing 1, then the event \(u(t)<\frac{1}{2}\) happens more frequently than \(u(t)>\frac{1}{2}\), which contradicts the beliefs of the players who chose 0. \(\square \)
Consider the n-player game and players' beliefs independent of player's own action.
Let n be odd.
The set of BDNE coincides with the set of pure strategy Nash equilibria and the set of pure strategy subjective equilibria and they are empty.
Let n be even.
The set of Nash equilibria coincides with the set of pure strategy subjective Nash equilibria. At every equilibrium, for all t, \(u(t)=\frac{1}{2}\).
Obtaining \(u(t)=\frac{1}{2}\) in (a) is impossible for players using pure strategies. The rest of the proof is analogous to the proof of Proposition 8. \(\square \)
Repeated Cournot oligopoly or competitive market
We consider a model of a market with a strictly decreasing inverse demand function \(p:\mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\) and strictly increasing, convex cost functions of the players \(c_{i}{:}\,\mathbb {R}_{+}\rightarrow \mathbb {R}_{+}\).
The set of players—producers of the same product—is either finite with the normalized counting measure in the Cournot oligopoly case, or the interval [0, 1] with the Lebesgue measure in the competitive market case.
The statistic of a static profile \(\delta \) is \(U(\delta )= \int _{\mathbb {I} }\delta (i)d\lambda (i)\).
Therefore, the instantaneous payoff is \(P_{i}(a,u)=p(u)\cdot a-c_{i}(a)\).
The discount factors of all players are identical with \(r_{i}\) equal to the market interest rate \(r>0\).
Kalai and Lehrer consider a similar example with n identical players and linear cost functions in Kalai and Lehrer (1995) and they prove that, besides the Nash equilibrium of the model, i.e. the Cournot equilibrium, there is another subjective equilibrium, in which the competitive equilibrium production level and price are obtained.
It can be easily proved that the opposite situation also takes place: in the continuum-of-players model, in which the competitive equilibrium is the only (up to measure equivalence) Nash equilibrium, the Cournot equilibrium behaviour for the n-player oligopoly constitutes a subjective equilibrium. Indeed, it is enough to make every player believe that the price reacts to his production level as in the n-player oligopoly case.
This is usually not the case when we consider BDNE, since the distortion of information concerns only the future.
If every player believes that his/her current decision does not influence future prices, then the only BDNE in the n player model is the Cournot-Nash equilibrium while in the continuum of players game—the competitive equilibrium.
An immediate consequence of Theorem 5—those profiles are the only Nash equilibria in the corresponding one shot games. \(\square \)
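As a rough numerical illustration of the difference between the two behaviours, the following sketch compares a symmetric n-player Cournot outcome with the price-taking (competitive) outcome; the functional forms \(p(u)=\max (0,1-u)\) and \(c_{i}(a)=a^{2}/2\), the grid, and the number of players are assumptions made only for this sketch, not taken from the text.

```python
# Illustrative comparison of Cournot (n-player) and price-taking behaviour.
# Assumed primitives for this sketch only: inverse demand p(u) = max(0, 1 - u),
# identical convex costs c(a) = a^2 / 2, u = average production (the statistic).

def p(u):
    return max(0.0, 1.0 - u)

def cost(a):
    return 0.5 * a * a

GRID = [k / 1000 for k in range(1001)]

def symmetric_outcome(n, price_taker, iters=200):
    a = 0.5
    for _ in range(iters):
        u_rest = (n - 1) * a / n
        if price_taker:
            price = p(u_rest + a / n)                       # price taken as given
            br = max(GRID, key=lambda x: price * x - cost(x))
        else:
            br = max(GRID, key=lambda x: p(u_rest + x / n) * x - cost(x))
        a = 0.5 * a + 0.5 * br                              # damped best-response iteration
    return a

if __name__ == "__main__":
    n = 5
    cournot = symmetric_outcome(n, price_taker=False)
    competitive = symmetric_outcome(n, price_taker=True)
    print(f"Cournot: output {cournot:.3f}, price {p(cournot):.3f}")
    print(f"Competitive: output {competitive:.3f}, price {p(competitive):.3f}")
    # Cournot players restrict output and keep the price above the competitive level.
```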
Repeated Prisoner's dilemma
We consider an infinite time horizon in which a standard Prisoner's Dilemma game is repeated infinitely many times.
There are two players who have two strategies in each stage game—cooperate coded as 1 and defect coded as 0.
The decisions are made simultaneously, therefore a player does not know the decision of his/her opponent.
If both players cooperate, they get payoffs C, if they both defect, they get payoffs N. If only one of the players cooperates, then he/she gets payoff A, while the defecting one gets R. These payoffs are ranked as follows: \(A<N<C<R\).
Obviously, the strictly dominant pair of defecting strategies (0, 0) is the only Nash equilibrium in one stage games, while a sequence of such decisions constitute a Nash equilibrium also in the repeated game, as well as a BDNE. We check whether a pair of cooperative strategies can also constitute a BDNE.
Let us consider beliefs of the form "if I defect now, the other player will defect for ever, while if I cooperate now, the other player will cooperate for ever", i.e. "the other player plays the grim trigger strategy".
The pair of grim trigger strategies "cooperate until the first defection of the other player, then defect for ever", denoted by GT, constitutes a Nash equilibrium if the discount rates are not too large.
However, this does not hold for a pair of simple strategies "cooperate for ever", denoted by CE.
We construct very simple beliefs which allow such a pair of strategies to be a BDNE. For clarity of exposition, we formulate them in "Repeated Prisoner's Dilemma" section in Appendix 1 and prove that such a pair of strategies is a BDNE (Proposition 13).
Note that a pair of cooperative strategies can be obtained as a subjective equilibrium only when we take the environmental response functions of both players describing the decision of the other player at the same stage. This means we at least have to assume that each player believes that his/her opponent knows his/her decision before making his/her own decision and immediately punishes defection. This may happen only in the situation of the first mover in a sequential, not simultaneous, Prisoner's Dilemma game, which contradicts the basic assumption about the game.
Where \(\ln 0\) is understood as \(-\infty \).
Which means that \(\exists \bar{t}\quad \forall t>\bar{t}\quad X(t)=0\).
In the case of the ozone hole problem, there was a positive ecological campaign in the media making many consumers of deodorants containing freons believe that their potential impact is such that their decision is not negligible.
In the case of the greenhouse effect problem we can observe an opposite campaign—there are many voices saying that there is no global warming, some even saying that scientists writing about it consciously lie.
It is a very rare situation, possible to obtain only because of \(- \infty \) as the guaranteed future payoff, independently of player's decision.
This holds always in the case of finitely many players, while for infinitely many players non-emptiness can be proved using measurability or analyticity of the graph of \(D(\cdot ,t,x)\) together with some properties of the strategy space and measure \(\lambda \)—see e.g. Wiszniewska-Matyszkiel (2000); it becomes trivial whenever all the sets \(D_{i}(t,x)\) have a nonempty intersection.
In the ozone hole example from the introduction, the statistic may represent the aggregate emission of fluorocarbonates, which can be expressed by a one-dimensional statistic with g returning the player's individual emission, i.e. \(g(i,d,x)=d\). The same statistic function is used in our clarifying example.
Taking such a form of statistic function does not have to be restrictive in games with finitely many players—in that case statistic may be the whole profile, as it is presented in Example 4.
In fact, the second argument of the strategy of a player (and, consequently, the third argument of S) is redundant—the state at the current time instant is a part of the history. However, we include x explicitly in order to simplify the notation.
Note that the statistic is defined only for measurable selections from players' strategy sets, therefore the statistics at time t is well defined whenever the function \(S_{\cdot }^{OL}(t-1)\) is measurable. Obviously, for games with finitely many players, this is always fulfilled.
See discussion in "The form of beliefs and anticipated payoffs considered in this paper" section in Appendix 3.
Finding a Nash equilibrium in a dynamic game requires solving a set of dynamic optimization problems coupled by finding a fixed point in a functional space.
Finding a pre-BDNE requires only finding a sequence of static Nash equilibria—each of them requires solving a set of simple static optimization problems coupled only by the value of u. Of course, some preliminary work has to be done to calculate anticipated payoffs, but in the case considered in this paper, it is again a sequence of static optimizations.
This formulation imposes conditions on derivative objects, like V, instead of conditions on primary objects, which may seem awkward. This is caused by the fact that, especially in infinite horizon problems, it is difficult to obtain any kind of regularity of \(V_i\)—a result of both dynamic and static optimization—given even very strong regularity assumptions on primary objects. A similar approach appears also in dynamic optimization, e.g. in the formulation of the Bellman equation for continuous time, in which regularity conditions are imposed on the value function.
Where the symbol \(\delta ^{i,a}\), analogously to \(S^{i,d}\), for a static profile \(\delta \) denotes the profile \(\delta \) with strategy of player i changed to a.
I.e., for every t, \(\bar{S}_{i}^{OL}\in \mathop {\mathrm{{Argmax}}}_{d:\mathbb {T}\rightarrow \mathbb {D}\ d(s)\in D_{i}(s,X(s))\text { for }s\ge t}\sum _{s=t}^{T}\frac{P_{i}(s,d(s),u(s),X(s))}{(1+r_i) ^{s-t}}+\frac{G_{i}\left( X(T+1)\right) }{(1+r_i)^{T+1-t}}\).
This assumption is relaxed in Wiszniewska-Matyszkiel (2015), where probabilistic beliefs are considered.
Atypically, the value function of player i's optimization is not directly dependent on the state variable argument x, since the trajectory X is fixed for the decision making problem of player i, so we skip this argument and consider \(\widetilde{V}_{i}\) as dependent on time only.
Aumann, R. J. (1964). Markets with a continuum of traders. Econometrica, 32, 39–50.
Aumann, R. J. (1966). Existence of competitive equilibrium in markets with continuum of traders. Econometrica, 34, 1–17.
Aumann, R. J. (1974). Subjectivity and correlation in randomized strategies. Journal of Mathematical Economics, 1, 67–96.
Aumann, R. J. (1987). Correlated equilibrium as an expression of bounded rationality. Econometrica, 55, 1–19.
Azrieli, Y. (2009a). Categorizing others in large games. Games and Economic Behavior, 67, 351–362.
Azrieli, Y. (2009b). On pure conjectural equilibrium with non-manipulable information. International Journal of Game Theory, 38, 209–219.
Balder, E. (1995). A unifying approach to existence of Nash equilibria. International Journal of Game Theory, 24, 79–94.
Battigalli, P. (1987). Comportamento razionale ed equilibrio nei giochi e nelle situazioni sociali. Unpublished thesis, Universita Bocconi.
Battigalli, P., Cerreia-Vioglio, S., Maccheroni, F., & Marinaci, M. (2012). Selfconfirming equilibrium and model uncertainty, Working Paper 428, Innocenzo Gasparini Institute for Economic Research.
Battigalli, P., & Guaitoli, D. (1997). Conjectural equilibria and rationalizability in a game with incomplete information. In P. Battigali, A. Montesano, & F. Panunzi (Eds.), Decisions, games and markets (pp. 97–124). Dordrecht: Kluwer.
Battigalli, P., & Siniscalchi, M. (2003). Rationalization and incomplete information. Advances in Theoretical Economics, 3(1).
Bellman, R. (1957). Dynamic programming. Princeton: Princeton University Press.
Blackwell, D. (1965). Discounted dynamic programming. Annals of Mathematical Statistics, 36, 226–235.
Brian Arthur, W. (1994). Inductive reasoning and bounded rationality. American Economic Review, 84, 406–411.
Cartwright, E., & Wooders, M. (2012). Correlated equilibrium, conformity and stereotyping in social groups. The Becker Friedman Institute for Research in Economics, Working paper 2012-014.
Ellsberg, D. (1961). Risk, ambiguity and the savage axioms. Quarterly Journal of Economics, 75, 643–669.
Eyster, E., & Rabin, M. (2005). Cursed equilibrium. Econometrica, 73, 1623–1672.
Fudenberg, D., & Levine, D. K. (1993). Self-confirming equilibrium. Econometrica, 61, 523–545.
Gilboa, I., Maccheroni, F., Marinacci, M., & Schmeidler, D. (2010). Objective and subjective rationality in a multiple prior model. Econometrica, 78, 755–770.
Gilboa, I., & Marinacci, M. (2011). Ambiguity and the Bayesian paradigm, Working Paper 379, Universita Bocconi.
Gilboa, I., & Schmeidler, D. (1989). Maxmin expected utility with non-unique prior. Journal of Mathematical Economics, 18, 141–153.
Harsanyi, J. C. (1967). Games with incomplete information played by Bayesian players, part I. Management Science, 14, 159–182.
Hennlock, M. (2008). A robust feedback Nash equilibrium in a climate change policy game. In S. K. Neogy, R. B. Bapat, A. K. Das, & T. Parthasarathy (Eds.), Mathematical programming and game theory for decision making (pp. 305–326).
Kalai, E., & Lehrer, E. (1993). Subjective equilibrium in repeated games. Econometrica, 61, 1231–1240.
Kalai, E., & Lehrer, E. (1995). Subjective games and equilibria. Games and Economic Behavior, 8, 123–163.
Klibanoff, P., Marinacci, M., & Mukerji, S. (2005). A smooth model of decision making under ambiguity. Econometrica, 73, 1849–1892.
Lasry, J.-M., & Lions, P.-L. (2007). Mean field games. Japanese Journal of Mathematics, 2, 229–260.
Marinacci, M. (2000). Ambiguous games. Games and Economic Behavior, 31, 191–219.
Mas-Colell, A. (1984). On the theorem of Schmeidler. Journal of Mathematical Economics, 13, 201–206.
Rubinstein, A., & Wolinsky, A. (1994). Rationalizable conjectural equilibrium: Between Nash and rationalizability. Games and Economic Behaviour, 6, 299–311.
Schmeidler, D. (1973). Equilibrium points of nonatomic games. Journal of Statistical Physics, 17, 295–300.
Stokey, N. L., Lucas, R. E, Jr, & Prescott, E. C. (1989). Recursive methods in economic dynamics. Cambridge: Harvard University Press.
Vind, K. (1964). Edgeworth-allocations in an exchange economy with many traders. International Economic Review, 5, 165–177.
Weintraub, G. Y., Benkard, C. L., & Van Roy, B. (2005). Oblivious equilibrium: A mean field approximation for large-scale dynamic games. Advances in neural information processing systems. Cambridge: MIT Press.
Wieczorek, A. (2004). Large games with only small players and finite strategy sets. Applicationes Mathematicae, 31, 79–96.
Wieczorek, A. (2005). Large games with only small players and strategy sets in Euclidean spaces. Applicationes Mathematicae, 32, 183–193.
Wieczorek, A., & Wiszniewska, A. (1999). A game-theoretic model of social adaptation in an infinite population. Applicationes Mathematicae, 25, 417–430.
Wiszniewska-Matyszkiel, A. (2000). Existence of pure equilibria in games with nonatomic space of players. Topological Methods in Nonlinear Analysis, 16, 339–349.
Wiszniewska-Matyszkiel, A. (2002). Static and dynamic equilibria in games with continuum of players. Positivity, 6, 433–453.
Wiszniewska-Matyszkiel, A. (2003). Static and dynamic equilibria in stochastic games with continuum of players. Control and Cybernetics, 32, 103–126.
Wiszniewska-Matyszkiel, A. (2005). A dynamic game with continuum of players and its counterpart with finitely many players. In A. S. Nowak & K. Szajowski (Eds.), Annals of the international society of dynamic games (Vol. 7, pp. 455–469). Basel: Birkhäuser.
Wiszniewska-Matyszkiel, A. (2008a). Common resources, optimality and taxes in dynamic games with increasing number of players. Journal of Mathematical Analysis and Applications, 337, 840–841.
Wiszniewska-Matyszkiel, A. (2008b). Stock market as a dynamic game with continuum of players. Control and Cybernetics, 37(3), 617–647.
Wiszniewska-Matyszkiel, A. (2010). Games with distorted information and self-verification of beliefs with application to financial markets. Quantitative Methods in Economics, 11(1), 254–275.
Wiszniewska-Matyszkiel, A. (2014a). Open and closed loop Nash equilibria in games with a continuum of players. Journal of Optimization Theory and Applications, 16, 280–301. doi:10.1007/s10957-013-0317-5.
Wiszniewska-Matyszkiel, A. (2014b). When beliefs about future create future—Exploitation of a common ecosystem from a new perspective. Strategic Behaviour and Environment, 4, 237–261.
Wiszniewska-Matyszkiel, A. (2015). Redefinition of belief distorted Nash equilibria for the environment of dynamic games with probabilistic beliefs, preliminary version available as preprint Belief distorted Nash equilibria for dynamic games with probabilistic beliefs, preprint 163/2007 Institute of Applied Mathematics and Mechanics, Warsaw University. http://www.mimuw.edu.pl/english/research/reports/imsm/
Institute of Applied Mathematics and Mechanics, Warsaw University, Warsaw, Poland
Agnieszka Wiszniewska-Matyszkiel
Correspondence to Agnieszka Wiszniewska-Matyszkiel.
The project was financed by funds of the National Science Centre granted by Decision Number DEC-2013/11/B/HS4/00857.
Appendix 1: Proofs of results
Proof of the equivalence Theorems 4 and 5
Proof (of Theorem 4)
In all the subsequent reasonings, we consider player i outside the set of measure 0 of players for whom the condition of maximizing assumed payoff (actual or anticipated—depending on the assumption) does not hold or anticipated payoffs can be infinite.
In the continuum of players case, the statistics of profiles and, consequently, the trajectories corresponding to them are identical for \(\bar{S}\) and all \(\bar{S}^{i,d}\)—profile S with strategy of player i replaced by d. We denote this statistic function by u, while the corresponding trajectory by X.
(a) We show that along the perfect foresight path the equation for the anticipated payoff of player i becomes the Bellman equation for optimization of the actual payoff by player i while \(V_{i}\) coincides with the value function.
Formally, given the profile of the strategies of the remaining players coinciding with \(\bar{S}\), let us define the value function of player i's decision making problem \(\widetilde{V}_{i}{:}\,\mathbb {T} \rightarrow \overline{\mathbb {R}}\).
$$\begin{aligned} \widetilde{V}_{i}(t)= & {} \sup _{d:\mathbb {T}\rightarrow \mathbb {D} \ d(s)\in D_{i}(s,X(s))\text { for }s\ge t}\sum _{s=t}^{T}P_{i}(d(s),u(s),X(s))\cdot \left( \frac{1}{1+r_{i}}\right) ^{s-t}\\&\quad +\,G_{i}\left( X(T+1)\right) \cdot \left( \frac{1}{1+r_{i}}\right) ^{T+1-t}. \end{aligned}$$
In the finite horizon case, \(\widetilde{V}_{i}\) fulfils the Bellman equation
$$\begin{aligned} \widetilde{V}_{i}(t)=\sup _{a\in D_{i}(t,X(t))}P_{i}(a,u(t),X(t))+ \widetilde{V}_{i}(t+1)\cdot \left( \frac{1}{1+r_{i}}\right) , \end{aligned}$$
with the terminal condition \(\widetilde{V}_{i}(T+1)=G_{i}(X(T+1))\).
In the infinite horizon case, \(\widetilde{V}_{i}\) also fulfils the Bellman equation, but the terminal condition sufficient for the solution of the Bellman equation to be the value function is different. The simplest form of it is \(\lim _{t\rightarrow \infty }\widetilde{V}_{i}(t)\cdot \left( \frac{1}{1+r_{i}}\right) ^{t-t_{0}}=0\) (see e.g. Stokey et al. 1989). Here it holds by the assumption that the payoffs are bounded.
If we substitute the formula for \(\widetilde{V}_{i}\) at the r.h.s. of the Bellman equation by its definition, then we get
$$\begin{aligned} \widetilde{V}_{i}(t)= & {} \sup _{a\in D_{i}(t,X(t))}P_{i}(a,u(t),X(t)))+ \left( \frac{1}{1+r_{i}}\right) \\&\cdot \left( \sup _{d:\mathbb {T}\rightarrow \mathbb {D} \ d(s)\in D_{i}(s,X(s))\text { for }s\ge t+1}\sum _{s=t+1}^{T}P_{i}(d(s),u(s),X(s))\cdot \left( \frac{1}{1+r_{i}} \right) ^{s-(t+1)}\right. \\&\quad \left. +\,G_{i}\left( X(T+1)\right) \cdot \left( \frac{1}{1+r_{i}}\right) ^{T+1-(t+1)}\right) . \end{aligned}$$
Note that the last supremum is equal to \(\widetilde{V}_{i}(t+1)\), but also to \(v_{i}(t+1,(X,u))\). Since (X, u) is the only history in the belief correspondence along all profiles \(\bar{S}^{i,d}\), it is also equal to \(V_{i}(t+1,\{(X,u)\})\).
$$\begin{aligned} \widetilde{V}_{i}(t)= & {} \sup _{a\in D_{i}(t,X(t))}P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}} \right) \cdot V_{i}\left( t+1,B_{i}\left( t,a,H^{\bar{S}^{i,d_{t,a}}}\right) \right) \\= & {} \sup _{a\in D_{i}(t,X(t))}\Pi _{i}^{e}(t,\bar{S}^{i,d_{t,a}}), \end{aligned}$$
where by \(d_{t,a}\) we denote a strategy of player i such that \(d_{t,a}(t,X(t),H^{\bar{S}})=a\) and at any other point of the domain it coincides with \(\bar{S}_{i}\).
Let us note that for all t, the set
$$\begin{aligned}&\mathop {\mathrm{{Argmax}}}_{d:\mathbb {T}\rightarrow \mathbb {D}\ d(s)\in D_{i}(s,X(s)) \text { for }s\ge t}\sum _{s=t}^{T}P_{i}(d(s),u(s),X(s))\cdot \left( \frac{1}{1+r_{i}}\right) ^{s-t}\\&\qquad +\,G_{i}\left( X(T+1)\right) \cdot \left( \frac{1}{1+r_{i}}\right) ^{T+1-t} \end{aligned}$$
is both the set of open loop forms of strategies of player i being his/her best responses to the strategies of the remaining players along the profiles \(\bar{S}\) and \(\bar{S}^{i,d}\) and the set at which the supremum in the definition of the function \(v_{i}(t,(X,u))\) is attained. We only have to show that \(\bar{S}_{i}(t)\in \mathop {\mathrm{{Argmax}}}_{a\in D_{i}(t,X(t))}\bar{\Pi } _{i}^{e}(t,H^{\bar{S}},\left( \bar{S}^{OL}(t)\right) ^{i,a})\). By the definition, this set is equal to
$$\begin{aligned}&\mathop {\mathrm{{Argmax}}}_{a\in D_{i}(t,X(t))} P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \cdot V_{i}(t+1,B_{i}(t,a,H^{\bar{S}^{i,d_{t,a}}}))\\&\qquad =\mathop {\mathrm{Argmax}}_{a\in D_{i}(t,X(t))} P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \cdot \widetilde{V}_{i}(t+1), \end{aligned}$$
which, by the Bellman equation, defines the set of best responses at time t, which contains \(\bar{S}_{i}(t)\), since \(\bar{S}\) is an equilibrium profile.
(b) An immediate conclusion from (a).
(d) Let \(\bar{S}\) be a pre-BDNE for B being the perfect foresight at \(\bar{S}\) and all \(\bar{S}^{i,d}\). We consider \(\widetilde{V}_{i}\) defined as in the proof of (a).
By the definition of pre-BDNE,
$$\begin{aligned}&\bar{S}_{i}(t)\in \mathop {\mathrm{Argmax}} _{a\in D_{i}(t,X(t))}\bar{\Pi }^{e}\left( H^{\bar{S}},\left( \bar{S}^{OL}(t)\right) ^{i,a}\right) \\&\quad =\mathop {\mathrm{Argmax}}_{a\in D_{i}(t,X(t))} P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \\&\qquad \cdot \max _{d:\mathbb {T}\rightarrow \mathbb {D} \ d(s)\in D_{i}(s,X(s))\text { for }s\ge t+1}\left( \sum _{s=t+1}^{T}P_{i}(d(s),u(s),X(s))\cdot \left( \frac{1}{1+r_{i}}\right) ^{s-(t+1)}\right. \\&\qquad \left. +\,G_{i}( X(T+1)) \cdot \left( \frac{1}{1+r_{i}}\right) ^{T+1-(t+1)}\right) \\&\quad =\mathop {\mathrm{Argmax}}_{a\in D_{i}(t,X(t))} P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \cdot \widetilde{V}_{i}(t+1). \end{aligned}$$
If we add the fact that
\(\widetilde{V}_{i}(t)=\max _{a\in D_{i}(t,X(t))}P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \cdot \widetilde{V}_{i}(t+1)\), then, by the Bellman equation, the set \(\mathop {\mathrm{Argmax}}_{a\in D_{i}(t,X(t))} P_{i}(a,u(t),X(t))+\left( \frac{1}{1+r_{i}}\right) \cdot \widetilde{V}_{i}(t+1)\) is the set of optimal decisions of player i at time t, given u(t) and X(t). Since we have this property for a.e. i, the profile \(\bar{S}\) is a Nash equilibrium.
(c) By (d) and (a). \(\square \)
Proof (of Theorem 5)
Consider player i for whom payoffs are bounded and anticipated payoffs are finite. In repeated games, the state set \(\mathbb {X}\) is a singleton, therefore the only variable influencing future payoffs is the statistic of the profile (via the dependence of the strategies of the remaining players on the history).
By the fact that beliefs are independent of a, the optimization in the definition of pre-BDNE is equivalent to optimization of \(P_{i}(a,u(t),X(t))\) only, since the latter term in the definition of anticipated payoff is independent of a and finite (since anticipated payoffs are finite).
Payoffs of player i are bounded, therefore their supremum is finite.
In games with a nonatomic space of players a decision of a single player does not influence the statistic, therefore the optimization of player i while calculating Nash equilibrium can be decomposed into optimization of \(P_{i}(a,u(t),X(t))\) at each time instant separately, as at pre-BDNE.
If strategies of the remaining players do not depend on histories of the game, then the current decision of the player does not influence his/her future payoffs, therefore the optimization of player i at a Nash equilibrium can be decomposed into optimization of \(P_{i}(a,u(t),X(t))\) at each time instant separately, as at pre-BDNE.
\(\square \)
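Purely as a computational illustration of the finite-horizon Bellman recursion used in the proof of Theorem 4, the following minimal sketch computes \(\widetilde{V}_{i}\) by backward induction along an exogenously fixed trajectory and statistic; the horizon, the action set, the stage and terminal payoffs, and the discount rate below are illustrative assumptions only.

```python
# Backward induction for V~_i(t) along a fixed trajectory X and statistic u:
# V~_i(T+1) = G_i(X(T+1)),  V~_i(t) = max_a P_i(a, u(t), X(t)) + V~_i(t+1) / (1 + r_i).

def value_function(T, actions, P, G, X, u, r):
    V = [0.0] * (T + 2)
    V[T + 1] = G(X[T + 1])                                    # terminal condition
    for t in range(T, -1, -1):                                # Bellman recursion
        V[t] = max(P(a, u[t], X[t]) + V[t + 1] / (1 + r) for a in actions)
    return V

if __name__ == "__main__":
    T, r = 3, 0.1
    X = [1.0] * (T + 2)                                        # trivial state trajectory
    u = [0.3] * (T + 1)                                        # fixed statistic of the profile
    P = lambda a, u_t, x: (0.5 - u_t) if a == 1 else 0.0       # an El Farol-type stage payoff
    G = lambda x: 0.0                                          # terminal payoff
    print(value_function(T, actions=[0, 1], P=P, G=G, X=X, u=u, r=r))
```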
Detailed calculations for examples
In this subsection of the appendix we give more precise formulations and proofs of the results for the examples.
Common ecosystem
In this section we formulate precise results for Example 1.
We start by the continuum of players game and we show that there are beliefs for which the n-player Nash equilibrium can be obtained as a pre-BDNE.
Let \((\mathbb {I},\mathfrak {I},\lambda )\) be the unit interval with the Lebesgue measure.
Consider any belief correspondence such that for every \(i\in \mathbb {J}\subset \mathbb {I}\) of positive measure, \(t\in \mathbb {T}\), \(H\in \mathbb {H}_{\infty }\), there exist \(\varepsilon _{1},\varepsilon _{2},\varepsilon _{3}>0\) with \(\varepsilon _{1}<\left( 1+\zeta \right) \) and constants \(0<\delta (i,t,H)<\varepsilon _{1}\) fulfilling the following: if \(a>\left( \left( 1+\zeta \right) -\delta (i,t,H)\right) \cdot X(t) \), then there exists a history \((X,u)\in B_{i}(t,a,H)\) for which \(X(s)=0\) for all \(s>t\); otherwise, for all \((X,u)\in B_{i}(t,a,H)\) and all \(s>t\), we have \(X(s)\ge \varepsilon _{2}\cdot e^{-\varepsilon _{3}s}\).
At each profile S being a pre-BDNE for this belief, for a.e. \(i\in \mathbb {J}\), we have \(S_{i}^{OL}(t)\le \left( (1+\zeta )-\delta (i,t,H)\right) \cdot X(t)\), and \(X(t)>0\) for every t.
Obviously, for every player \(i\in \mathbb {J}\), the decision at time t maximizing \(\bar{\Pi }_{i}^{e}\) for any profile of decisions of the remaining players is \(\left( (1+\zeta )-\delta (i,t,H)\right) \cdot X(t)\)—the maximal extraction that does not result in \(V_{i}(t+1,B_{i}(t,a,H^{S}))=-\infty \), since \(\bar{\Pi }_{i}^{e}(t,H^{S},S^{OL}(t))\ge \sum _{s=t}^{T}\ln \left( \left( (1+\zeta )-\delta (i,t,H)\right) \cdot \varepsilon _{2}\cdot e^{-\varepsilon _{3}s}\right) \cdot (1+r)^{-(s-t)}\ge \sum _{s=t}^{T} \left( -\varepsilon _{3}\cdot s + \ln \left( (1+\zeta )-\varepsilon _{1}\right) +\ln ( \varepsilon _{2}) \right) \cdot (1+r)^{-(s-t)}> -\infty \).
We have \(X(t_{0})>0.\)
If \(\nu \!=\!\int _{\mathbb {J}}\delta (i,t,H)d\lambda (i)\), then \(X(t+1)\ge X(t)\cdot \left( 1-\left( \left( (1\!+\!\zeta )\!-\!\nu \right) -\zeta \right) \right) =X(t)\cdot \nu >0\), so \(X(t)>0\) implies \(X(t+1)>0\). \(\square \)
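The role of the belief-induced cap on extraction can be illustrated by a small simulation of the resource dynamics \(X(t+1)=X(t)-\max \{0,u(t)-\zeta X(t)\}\); the values of \(\zeta \), \(\delta \), the initial stock and the horizon below are purely illustrative assumptions, and a symmetric profile is assumed so that u(t) equals the common per-player extraction.

```python
# Illustrative simulation of the common-ecosystem dynamics from Propositions 10-11.

def simulate(extraction, zeta=0.05, X0=1.0, T=20):
    X = [X0]
    for t in range(T):
        u = extraction(X[t])                            # common extraction at time t
        X.append(X[t] - max(0.0, u - zeta * X[t]))      # dynamics of the stock
    return X

if __name__ == "__main__":
    zeta, delta = 0.05, 0.02
    cautious = simulate(lambda x: ((1 + zeta) - delta) * x, zeta)   # capped extraction
    greedy = simulate(lambda x: (1 + zeta) * x, zeta)               # maximal extraction
    print("capped extraction keeps X(t) > 0 for all t:", all(x > 0 for x in cautious))
    print("maximal extraction: X(1) =", greedy[1])                  # the stock is destroyed at once
```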
It may seem that the pessimistic approach of players in the concept of BDNE presented in this paper is the reason why the ecosystem can be saved also in the continuum of players game. Nevertheless, also in the case of purely probabilistic beliefs and expected optimal future payoff used instead of guaranteed optimal future payoff, which are used in a redefinition of the concept of BDNE in Wiszniewska-Matyszkiel (2015), a similar result can be obtained.
Now we consider the n-players game and show that we can obtain the destructive continuum of players equilibrium as a pre-BDNE.
The first kind of beliefs is similar to the one in the preliminary description of the concept in Sect. 2.2.
Let \((\mathbb {I},\mathfrak {I},\lambda )\) be an n-element set with the normalized counting measure.
Consider a belief correspondence such that there exists t such that for a.e. i and every H, \(B_{i}(t,a,H)\) contains a history (X, u) for which for some \(s>t\) \(X(s)=0\). Then every dynamic profile, including profiles S such that \(X^{S}(\bar{t})=0\) for some \(\bar{t}>t\), is a pre-BDNE for B.
For a.e. player at every t for every a, \(V_{i}(t+1,B_{i}(t,a,H))=-\infty \). Therefore, each choice of the players is in the best response to such a belief. \(\square \)
Now we check self-verification of the counter-intuitive beliefs considered in Propositions 10 and 11.
Let \(\mathbb {J}\) be a set of players of positive measure. Assume that the beliefs of the remaining players \(\mathbb {I}\backslash \mathbb {J}\) are independent of their own actions and they do not contain any history with \(X(t)=0\) for some t. There exists a belief correspondence such that the assumptions of Proposition 10 are fulfilled for all players from \(\mathbb {J}\), for which the beliefs of players from \(\mathbb {J}\) are perfectly self-verifying against the beliefs of the remaining players and the whole belief correspondence is perfectly self-verifying.
Any profile S such that \(S_{i}^{OL}(t)<(1+\zeta ) X(t) \) for i in a set of positive measure, is a BDNE.
Let S be a profile such that for some t, \(X^{S}(t)=0\). Such a profile is a BDNE.
(a) We are going to construct such a belief. For players from \(\mathbb {I}\backslash \mathbb {J}\) the belief correspondence is not dependent on a and disjoint from the set \(\left\{ (X,u){:}\,\exists t\ X(t)=0\right\} \). We specify it after some calculations.
Let \(\nu =\lambda (\mathbb {J})\).
Let us consider a strategy profile S such that every player \(i\in \mathbb {J}\) chooses \(S_{i}(t,x,H)=\alpha \cdot x\) for some \(\alpha \ge \zeta \), while every \(i\notin \mathbb {J}\) chooses \(S_{i}(t,x,H)=(1+\zeta )\cdot x\). Then the statistic of this profile at time t is equal to \(u(t)=X(t)\left( (1+\zeta )(1-\nu )+\nu \alpha \right) \) while the trajectory corresponding to it fulfils \(X(t+1)=X(t)-\max \{0,u(t)-\zeta X(t)\}=X(t)\cdot \nu \cdot \left( (1+\zeta )-\alpha \right) \). Let us take \(\alpha =\zeta +\frac{1}{2}\).
We define a condition (*\(_{t,c,H}\)): for every \(s>t\), every history fulfils \(X(s+1)\ge X(s)\cdot \nu \cdot \left( (1+\zeta )-c \right) \), and there exists a history such that \(u(s)=X(s)\left( (1+\zeta )(1-\nu )+\nu c \right) \) and \(X(s+1)=X(s)\cdot \nu \cdot \left( (1+\zeta )-c \right) \). We consider any family of belief correspondences with \(B_{i}(t,a,H)\), for a written as cX(t), such that: for every t and H, for all \(i\notin \mathbb {J}\), (*\(_{t,c,H}\)) holds whatever c is; for all \(i\in \mathbb {J}\), (*\(_{t,c,H}\)) holds for \(c\le \alpha \), while for \(c>\alpha \), \( B_{i}(t,cX(t),H)\) contains a history with \(X(s)=0\) for some \(s>t\).
For such a belief, the decision maximizing \(\bar{\Pi }_{i}^{e}\) for every player i from \(\mathbb {J}\) is \(\alpha \cdot X(t)\), while for players from \(\mathbb {I}\backslash \mathbb {J}\) the optimal choice is \((1+\zeta )\cdot X(t)\). Therefore, all the pre-BDNE for this belief fulfil the above assumptions, which yields both kinds of perfect self-verification we wanted to prove.
(b) The profile S is a BDNE for the beliefs defined in the proof of (a), which can be proven similarly to Proposition 10. Since these beliefs are perfectly self-verifying, S is a BDNE.
(c) Consider any belief correspondence such that there exists \(\bar{t}\) such that for a.e. i and every H, the belief \(B_{i}(\bar{t},a,H)\) contains an admissible history \(H^{\prime }\) (i.e. such that \(H^{\prime }=H^{\tilde{S}}\) for some profile \(\tilde{S}\)) for which \(X(s)=0\) for some \(s>\bar{t}\), and for every \(s<\bar{t}\), the belief \(B_{i}(s,S^{OL}_i(s),H^S)\) contains \(H^S\). Then our profile S is a BDNE for these beliefs. \(\square \)
It requires some effort to write the standard Prisoner's Dilemma game in a form suiting the framework of this paper, which is designed rather for games with many players.
In order to use the notation of this paper, we consider the whole profile as the statistic function and write the payoff function as follows.
\(P_{i}(a,u)= \left\{ \begin{array}{ll} C &{}\quad \text {for}\,\, a,u_{\sim i} =1;\\ A~&{}\quad \text {for}\,\, a=1, \ u_{\sim i}=0;\\ R &{}\quad \text {for}\,\, a=0, \ u_{\sim i}=1;\\ N &{}\quad \text {for}\,\, a,u_{\sim i}=0, \end{array} \right. \)
where \(u_{\sim i}\) denotes the coordinate of the statistic corresponding to the strategy of the other player.
We describe pre-BDNE at which players cooperate in the repeated Prisoner's dilemma example, and, among others, we show that a pair of cooperative strategies can be obtained as a BDNE.
If both \(r_{i}\) are small enough, then:
the pair of grim trigger strategies (GT,GT) is a BDNE for \(\bar{B}\) and a Nash equilibrium; moreover, the set of \(r_{i}\) for which (GT,GT) is a BDNE is larger than that for which it is a Nash equilibrium;
all profiles of the same open loop form as (GT,GT), including the pairs (CE,CE) and (GT,CE), are also BDNE for some perfectly self-verifying beliefs for which any profiles of another open loop form are not pre-BDNE;
beliefs \(\bar{B}\) are perfectly self-verifying.
Consider beliefs \(\bar{B}\) defined as follows: for every \(H \in \mathbb {H}_{\infty }\), \(\bar{B}_{i}(t,0,H)=\{H' \in \mathbb {H}_{\infty }{:}\, H'|_t = H|_t \text { and } \forall s>t \ (H'(s))_{\sim i}=0\}\) and \(\bar{B}_{i}(t,1,H)=\{H' \in \mathbb {H}_{\infty }{:}\, H'|_t = H|_t \text { and }\forall s>t \ (H'(s))_{\sim i}=1\}\).
(a) We start by proving that the profile (GT,GT) is a Nash equilibrium.
Consider any time instant t and player's best response to GT from this time instant on. Assume that at time t, player chooses to defect. Then his/her maximal payoff for such a profile from time t on is \(R+\sum _{s=t+1}^{\infty } \frac{N}{(1+r_{i})^{(s-t)}}\), while for the GT strategy \(C+\sum _{s=t+1}^{\infty } \frac{C}{(1+r_{i})^{(s-t)}}\).
The condition for GT to be optimal is, therefore, \((R-C)\cdot r_{i}<C-N\), which holds for small \(r_{i}\).
Now we prove that (GT,GT) is also a BDNE for \(\bar{B}\).
Consider a time instant t and a history H. The guaranteed anticipated values are
$$\begin{aligned} V_{i}(t,B_{i}(t,0,H))= & {} \sum _{s=t}^{\infty } \frac{N}{(1+r_{i})^{(s-t)}}=\frac{(1+r_{i})\cdot N}{r_{i}}\quad \hbox {and} \\ V_{i}(t,B_{i}(t,1,H))= & {} \sum _{s=t}^{\infty } \frac{R}{(1+r_{i})^{(s-t)}}=\frac{(1+r_{i})\cdot R}{r_{i}}. \end{aligned}$$
Therefore, for player i, without loss of generality player 1,
$$\begin{aligned} \bar{\Pi }_{1}^{e}(t,[0,1])=R+\frac{N}{r_{1}},\quad \hbox {while}\quad \ \bar{\Pi }_{1}^{e}(t,[1,1])=C+\frac{R}{r_{1}}. \end{aligned}$$
This yields the inequality guaranteeing that cooperation is better than defection \((R-C)\cdot r_{i}<R-N\). If both \(r_{i}\) fulfil this, then (GT,GT) is a BDNE for \(\bar{B}\) and no profile with different open loop form can be a pre-BDNE for \(\bar{B}\).
(b) By (a) and the fact that both strategies GT and CE behave in the same way whenever they do not face a defection of the other player, which leads to the same open loop form as (GT,GT).
(c) Perfect self-verification of \(\bar{B}\) is a consequence of this and the fact that at every pre-BDNE for \(\bar{B}\), the history is \(H^{\text {(GT,GT)}}(t)\equiv (1,1)\), which belongs to the beliefs at each time instant. \(\square \)
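The two thresholds derived in the proof above—\((R-C)\cdot r_{i}<C-N\) for (GT,GT) to be a Nash equilibrium and \((R-C)\cdot r_{i}<R-N\) for it to be a BDNE for \(\bar{B}\)—can be checked numerically; the particular payoff values below (any values satisfying \(A<N<C<R\) would do) are assumptions made only for this sketch.

```python
# Numerical check of the Nash and BDNE thresholds for cooperation, for illustrative payoffs.

A, N, C, R = 0.0, 1.0, 3.0, 4.0

def cooperate_minus_defect_nash(r):
    """Against a grim trigger opponent: (C + C/r) versus (R + N/r)."""
    return (C + C / r) - (R + N / r)

def cooperate_minus_defect_bdne(r):
    """Under the beliefs B-bar: anticipated payoffs (C + R/r) versus (R + N/r)."""
    return (C + R / r) - (R + N / r)

if __name__ == "__main__":
    for r in (0.5, 1.5, 2.5, 3.5):
        nash_ok = cooperate_minus_defect_nash(r) >= 0    # holds iff (R - C) * r <= C - N
        bdne_ok = cooperate_minus_defect_bdne(r) >= 0    # holds iff (R - C) * r <= R - N
        print(f"r = {r}: cooperation supported as Nash: {nash_ok}, as BDNE: {bdne_ok}")
    # Here the Nash threshold is r <= (C - N)/(R - C) = 2 and the BDNE threshold is
    # r <= (R - N)/(R - C) = 3, so the set of discount rates supporting (GT,GT) as a
    # BDNE is strictly larger, as stated in Proposition 13.
```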
Appendix 2: Games with a measure space of players
Games with a measure space of players are usually perceived as a synonym for games with infinitely many players, also called large games.
In order to make it possible to evaluate the influence of an infinite set of players on aggregate variables, a measure is introduced on a \(\sigma \)-field of subsets of the set of players. However, the notion of games with a measure space of players also encompasses games with finitely many players, where e.g. the counting or normalized counting measure on the power set may be considered.
Introducing the normalized counting measure instead of the usual counting measure makes it possible to approximate a sequence of games, in which the finite number of players increases, by their limit game with a continuum of players. This approximation mimics the process of discovering the real nature of decision making problems in which a large number of individuals are involved. During this process, abstract aggregates playing the role of decision makers in order to decrease dimensionality (like "Mankind", "Poor countries" and "Rich countries", "Representative consumer" etc.) are replaced by the actual decision makers.
Large games were introduced in order to illustrate situations where the number of agents is large enough to make a single agent from a subset of the set of players (possibly the whole set) insignificant—negligible—when we consider the impact of his action on aggregate variables, while joint action of a set of such negligible players is not negligible. This happens in many real-life situations: at competitive markets, stock exchange, or while we consider emission of greenhouse gases and similar global effects of exploitation of the common global ecosystem.
Although it is possible to construct models with countably many players illustrating the phenomenon of this negligibility, they are very inconvenient to cope with. Therefore, the simplest examples of large games are so-called games with a continuum of players, where players constitute a nonatomic measure space, usually the unit interval with the Lebesgue measure.
The first attempts to use models with a continuum of players are contained in Aumann (1964, 1966) and Vind (1964).
Some theoretical works on large games are Schmeidler (1973), Mas-Colell (1984), Balder (1995), Wieczorek (2004) and Wieczorek (2005), Wieczorek and Wiszniewska (1999) and Wiszniewska-Matyszkiel (2000).
The general theory of dynamic games with a continuum of players is still being developed: there are the author's papers Wiszniewska-Matyszkiel (2002) and Wiszniewska-Matyszkiel (2003) and a new branch of mean-field games started by Weintraub et al. (2005) and Lasry and Lions (2007). There are also interesting applications of such games in various economic problems.
The comparison between a sequence of games with finitely many players and their limit game with a continuum of players, illustrating the process of disclosing the real decision makers, as mentioned at the beginning of this section, in the context of dynamic games, can be found in Wiszniewska-Matyszkiel (2005) and Wiszniewska-Matyszkiel (2008a).
Appendix 3: Incomplete, distorted or ambiguous information
Other concepts of equilibria taking incomplete, ambiguous or distorted information into account
There are many concepts taking beliefs into account which may seem related to the research presented in this paper. They are usually applicable in games with a stochastic environment or with randomness caused by using mixed strategies. In fact, none of them is applicable to dynamic games.
The first are Bayesian equilibria, introduced by Harsanyi (1967). They are used in games with uncertainty about payoff functions of remaining players depending on their types with common prior distribution of types and common prior belief in rationality of players. The prior is assumed to be known correctly by all the players.
That approach is continued by e.g. Battigalli and Siniscalchi (2003) using \(\Delta \)-rationalizability, being an iterative procedure of eliminating type-strategy pairs in which the strategy is strictly dominated according to player's beliefs or which are contradicted by a history of play.
There are also conjectural equilibria of Battigalli, introduced in Battigalli (1987) and eventually published in Battigalli and Guaitoli (1997), in which each player's action is consistent with some conjecture about the behaviour of the other players based on an observed signal—it is a best response to this conjecture. That concept does not assume that the conjectures about the behaviour of the others are rational. This problem is considered by Rubinstein and Wolinsky (1994), and it gave birth to the concept of rationalizable conjectural equilibrium, which is between Nash equilibrium and the concept of rationalizability.
Assumptions of \(\Delta \)-rationalizability are weakened in the concept of cursed equilibrium, considered by Eyster and Rabin (2005), related to Bayesian equilibria, in which players correctly assess the common prior probability distribution of the other players' types, and therefore, strategies, but they neglect the information revealed during the play by the other players' actions. E.g. in the "market for lemons" model they neglect the fact that offering a car at a price lower than price of a "peach" is disclosure of information of bad quality, and in such a case they still assign positive probability to the fact that such a car is a "peach", which leads to "the winner's curse".
One of the other approaches are self-confirming equilibria introduced by Fudenberg and Levine (1993). That concept is designed to extend Nash equilibrium in extensive form games in which players' information about majority of parameters (the whole structure of the extensive form of the game and probabilities of natures moves) of the game is perfect and the only imperfection of information concerns strategies chosen by the other players. This concept is further studied by, among others, Azrieli (2009b).
The concept of belief distorted Nash equilibria can also be compared to conjectural categorical equilibria of Azrieli (2009a) and stereotypical beliefs of Cartwright and Wooders (2012), applicable in games with many players. Those concepts are designed to model situations in which every player takes into account aggregate variables over some categories of players and form assumptions about behaviour of the other players based on a group to which they belong. In those concepts stereotypes are consistent with reality, so they model rather aggregation than distortion of information.
Another approach, which seems to be related to a large extent to the approach presented in this paper, are subjective equilibria. This concept is introduced by Kalai and Lehrer (1993) and Kalai and Lehrer (1995) (although the concept of subjectivity in correlated equilibria is considered in earlier papers of Aumann (1974) and Aumann (1987)). In subjective equilibria players' beliefs have the form of environmental response function which describes the distribution of possible reactions of the system at the considered time instant. The game is assumed to be repeated many times and beliefs at each stage depend on histories of player's observations of his/her actions and their immediate consequences. An assumption that beliefs are not contradicted by observations is added, which implies that if a player uses some form of Bayesian updating, then his/her belief cannot be contradicted by the observed history of the game. This concept is explained in more detail in Sect. 4, where we compare it to BDNE—the new kind of equilibrium defined in this paper.
It is worth emphasizing that the concept of subjective equilibrium, similarly to the concept of BDNE, can be used in situations where information is seriously distorted, e.g. an unknown structure of the game or an unknown number of players. It can even work in the extreme situation when a player does not know that he/she participates in a game, i.e. that there is an interaction with other conscious decision makers, not only optimization in an abstract "system". However, the concept of subjective equilibrium is in fact static.
The concept of BDNE considered in this paper is not a refinement of Nash equilibrium or subjective equilibrium. Neither is it a direct extension of the concept of subjective equilibrium. In this paper we consider a more general class of games—multistage games. Therefore, unlike in repeated games, we can model players' ambiguity about the future behaviour of the system in which the game is played. To reflect the reality of games in which the moves at each stage are simultaneous, we assume that the decision of a player is independent of the current choice of another player at the same stage. Besides, beliefs in this approach do not form probability distributions but take into account ambiguity (or model uncertainty).
The primitives of the concept of pre-BDNE and BDNE presented in this paper appear in Wiszniewska-Matyszkiel (2008b) in a model of stock exchange, in a trivial formulation, without explicit formulation of belief correspondences. That study of a stock exchange is continued in Wiszniewska-Matyszkiel (2010). A simpler version of the concepts, suited to a specific problem, appears also in a thorough analysis of a simple common ecosystem game in Wiszniewska-Matyszkiel (2014b), studying various policy implications.
The form of beliefs and anticipated payoffs considered in this paper
In this paper, we concentrate on player's beliefs which have a form of a multivalued correspondence interpreted as possible future trajectories of parameters unknown to the player.
Such beliefs describe decision making problems where, as in many real life situations, decision makers are unable to assign precisely a probability distribution over a set of numerous possible results or do not regard it as necessary.
The fact that people's choices in some situations contradict the assumption that decision maker assigns some precise subjective probability distributions to ambiguous events was first illustrated by the Ellsberg paradox (Ellsberg 1961). Many repetitions of Ellsberg's experiment show that people usually prefer less ambiguous situations.
Such a behaviour was explained by Gilboa and Schmeidler (1989), where a mathematical model of decision making under ambiguity was proposed. If the decision making problem is such that an exact probability distribution (subjective or objective) cannot be assigned in some obvious way, then instead of one probability distribution a set of probability distributions regarded as possible is considered. Given such a structure of the problem, a player chooses a decision which maximizes his/her expected payoff in the worst possible case (and a new kind of utility—maximin utility/payoff—is, therefore, used instead of the usual expected utility/payoff).
There are many papers continuing the approach of Gilboa and Schmeidler (1989): e.g. Klibanoff et al. (2005), Gilboa et al. (2010) and Gilboa and Marinacci (2011). It also appears in game theoretic applications—the concept of ambiguous games is introduced in Marinacci (2000). Maximin utility has already been used in the dynamic games context (among others, Hennlock (2008)) and it has already appeared in attempts to redefine the concept of Nash equilibrium in order to model ambiguity (Battigalli et al. 2012). In those papers, although decision makers or players are generally expected utility or payoff maximizers, there is ambiguity concerning their priors, which leads to a whole set of possible probability distributions; the worst of the possible expected payoffs is maximized, which corresponds to our anticipated payoff.
Generally, in the context of BDNE, for simplicity of the first formulation of the concept, two extreme cases of ambiguity are considered. In this paper the probability distributions are assumed to be trivial, while there may be many of them regarded as possible. This approach implies a very simple and intuitive formulation of self-verification. The author also proposes a redefinition of the concept of BDNE in games with distorted information for purely probabilistic beliefs, with one probability distribution assigned, in Wiszniewska-Matyszkiel (2015). In that case the calculation of expected utility given beliefs is simpler, but the definition of self-verification is more complicated and not so intuitive. A more general definition encompassing both aspects is an obvious next stage of research.
Besides simplicity of formulation, there is also one more reason for considering beliefs in the form of a set of possible histories instead of a probability distribution. In the Ellsberg paradox the probabilistic character is inherent to the problem considered—drawing a ball from an urn. Conversely, if we consider non-repeated games in which players use pure strategies only, the problem faced by a player not knowing the strategies chosen by his/her opponents does not appear to be probabilistic. In a game with more than one Nash equilibrium a player may consider beliefs of the form "my opponents play their Nash equilibrium profile strategies, but I have no information which Nash equilibrium". Such a belief can be described by a multivalued correspondence of results considered as possible, and the maximin approach "I choose a strategy maximizing my payoff in the worst possible case" seems natural.
Note that in games with many players, the players cease to choose mixed strategies and pure strategy equilibria exist (see e.g. Balder 1995 or Wiszniewska-Matyszkiel 2000). Therefore, expecting deterministic results in such a case seems justified.
Conclusions and further research
In this paper a new notion of equilibrium—Belief Distorted Nash Equilibrium (BDNE)—is introduced.
It is a notion designed to deal with situations in which players' information about the game they play, including elements crucial from their point of view, is far from complete; it may be ambiguous or even distorted.
This lack of crucial information encompasses the way in which the other players, and/or the dynamic system in which the game takes place, are going to react in the future to the player's current decision.
In this concept we assume that players formulate beliefs (sets of future scenarios regarded as possible), that they best respond to those beliefs, and that, consequently, the model of reality they assumed and formulated in their beliefs is confirmed by the observed data.
Existence and equivalence theorems are proved and various concepts of self-verification are introduced. Among other things, we obtain that in games with a continuum of players, to which the model is especially applicable, the set of BDNE for the perfect foresight belief coincides with the set of Nash equilibria.
The theoretical results are also illustrated by examples: a model of exploitation of a common renewable resource being the basis of existence of the population using it, a Minority Game, a model of a market with many firms and a repeated Prisoner's Dilemma.
The analysis of the examples, among other things, indicates the importance of proper ecological education and shows that some beliefs, even if they are inconsistent with objective knowledge based on some theoretical sciences, can be regarded as rational in some sense if they have the property of self-verification.
It is also worth recalling that, if compared to looking for a Nash equilibrium in a dynamic game, finding a pre-BDNE, given beliefs, is simpler.
Finding a Nash equilibrium in a dynamic game requires solving a set of dynamic optimization problems coupled by finding a fixed point in a functional space. As is well known, this is often impossible even when the dynamic optimization problem which is the one-player analogue of the game can be easily solved.
Finding a BDNE requires only finding a sequence of static Nash equilibria, each of which requires solving a set of simple static optimization problems coupled only by the value of the statistic u at the considered time instant. Of course, some preliminary work has to be done to calculate anticipated payoffs, but in the framework considered in this paper, it is again a sequence of static optimizations.
An obvious continuation of the research contained in this paper is to redefine the concept of BDNE for other forms of beliefs: beliefs which have standard probabilistic form and the mixed case between the approach presented in this paper and the probabilistic approach. Although the redefinition of the concept of pre-BDNE is not very difficult and not controversial, the concepts of self-verification and BDNE may be disputable. The problem has already been addressed by the author in the first case and preliminary results are in Wiszniewska-Matyszkiel (2015).
Wiszniewska-Matyszkiel, A. Belief distorted Nash equilibria: introduction of a new kind of equilibrium in dynamic games with distorted information. Ann Oper Res 243, 147–177 (2016). https://doi.org/10.1007/s10479-015-1920-7
Issue Date: August 2016
Distorted information
Noncooperative games
Games with a continuum of players
n-player dynamic games
Belief-distorted Nash equilibrium (BDNE)
Pre-BDNE
Subjective equilibrium
Cournot oligopoly
Competitive equilibrium
Minority Game
Prisoner's Dilemma
Is there a type of star that emits relatively monochromatic visible light?
I was thinking about this question on Worldbuilding and the answers involving monochromatic ambient light, which got me wondering if there were any stars that emitted relatively monochromatic light (a "very" narrow band, although I don't have a definition, but something narrow enough to produce the effect in that photo, atmospheric effects notwithstanding).
So my question is: Is there any type of star that emits an extremely narrow band of visible light (not necessarily exclusively visible, non-visible wavelengths can be included) that we have either observed or theorized to exist? "Narrow" as in, on par with a low pressure sodium-vapor lamp.
My Google searches are hindered by a lot of noise, mostly things like monochromatic star patterns, and a few hits regarding non-visible radiation.
Jason C
Related question here: astronomy.stackexchange.com/questions/10510/…
In some sense, yes.
UV continuum $\rightarrow$ UV line
This is probably not the answer you're looking for, but massive stars (O and B stars) emit a very hard UV spectrum of light (a UV "continuum", i.e. a broad range of wavelengths). Since these stars don't live for long (because they burn their fuel fast), they tend to be located in the gas clouds from which they were born. The UV light ionizes the enshrouding hydrogen, but the protons and electrons quickly recombine. If the electron goes directly to the ground state, another UV photon is emitted, capable of ionizing another hydrogen atom. In most cases, however, the electron "cascades" down multiple levels, emitting photons of different energies, which may excite other atoms.
It turns out that for every 3 ionizing photons emitted by the star, 2 will eventually — after several interactions — result in a photon corresponding to the energy difference between the first excited state and the ground state: the so-called "Lyman $\alpha$" photon, with a wavelength of 1216 Ångström (121.6 nm). Although there is some broadening due to thermal motion of the atoms and, in particular, due to the resonant scattering of Lyman $\alpha$ on neutral hydrogen, the result is that most of the light from these stars is converted into a single, very narrow (of the order of a few Ångström) emission line, i.e. very close to monochromatic light.
These stars are probably rarely surrounded by planets (because the radiation pressure will tend to blow away the particles used to build planets), and even if they were, these processes happen farther away from the stars than the planets would be. But if you observe a young galaxy, whose spectrum is dominated by young stars, the Lyman $\alpha$ emission line is often the only light visible.
UV line $\rightarrow$ visible line
The Lyman $\alpha$ line is still in the UV, and thus invisible to humans. However, since light is redshifted as it travels through the expanding Universe, Lyman $\alpha$-emitting galaxies sufficiently far away will have their emission line carried into the visible range. Since the shortest wavelength we can see is 400 nm, it must be redshifted by a factor $400/121.6 = 3.3$ (i.e. $z=2.3$), corresponding to a distance of 18.6 billion lightyears (but if it's farther away than 25.5 billion lightyears, it will shift into the infrared and be invisible again). Note, though, that this is only in principle; such galaxies are far too dim to be visible to the naked eye. To see them, you must use a telescope and a camera.
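A short script can reproduce the redshift window quoted above; this is an illustrative sketch (not from the answer) that uses only the relation $1+z = \lambda_\text{obs}/\lambda_\text{emit}$, while the quoted distances additionally require a cosmological model.

```python
# Redshift window in which Lyman-alpha (121.6 nm) lands in the visible band.
# Uses only 1 + z = lambda_observed / lambda_emitted; the distances quoted in the
# answer additionally depend on a cosmological model and are not reproduced here.
LYMAN_ALPHA_NM = 121.6
VISIBLE_MIN_NM = 400.0   # shortest wavelength the eye can see
VISIBLE_MAX_NM = 700.0   # longest wavelength the eye can see

z_min = VISIBLE_MIN_NM / LYMAN_ALPHA_NM - 1   # ~2.3: line enters the blue end
z_max = VISIBLE_MAX_NM / LYMAN_ALPHA_NM - 1   # ~4.8: line leaves the red end

print(f"Lyman-alpha falls in the visible band for {z_min:.1f} < z < {z_max:.1f}")
```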
Like the other answer stated, stars usually emit a good approximation to black body radiation.
However, under certain conditions, a star can emit monochromatic microwaves. This is known as a MASER. For this to happen, the star has to be either very cool (since the emission is from molecular gas) or embedded in a star forming region - in the latter case, it's not the actual star emitting the radiation but the molecular cloud it is embedded in.
Now, the "normal" starlight is still present even if you have a MASER, and the emission is highly directional. So you might not see the MASER if you're looking from the wrong direction at the star, and it will not be purely monochromatic, but the MASER emission might be quite dominant.
Alex
Localizing flow models with Slepian functions
Flow models: synthetic, numerical and inverted
Decomposition of flow models
Analysis of the flow separation
Investigation of regional variation in core flow models using spherical Slepian functions
Hannah F. Rogers1,
Ciarán D. Beggan2 and
Kathryn A. Whaler1
By assuming that changes in the magnetic field in the Earth's outer core are advection-dominated on short timescales, models of the core surface flow can be deduced from secular variation. Such models are known to be under-determined and thus require other assumptions to produce feasible flows. There are regions where poor knowledge of the core flow dynamics gives rise to further uncertainty, such as within the tangent cylinder, and assumptions about the nature of the flow may lead to ambiguous patches, such as if it is assumed to be strongly tangentially geostrophic. We use spherical Slepian functions to spatially and spectrally separate core flow models, confining the flow to either inside or outside these regions of interest. In each region we examine the properties of the flow and analyze its contribution to the overall model. We use three forms of flow model: (a) synthetic models from randomly generated coefficients with blue, red and white energy spectra, (b) a snapshot of a numerical geodynamo simulation and (c) a model inverted from satellite magnetic field measurements. We find that the Slepian decomposition generates unwanted spatial leakage which partially obscures flow in the region of interest, particularly along the boundaries. Possible reasons for this include the use of spherical Slepian functions to decompose a scalar quantity that is then differentiated to give the vector function of interest, and the spectral frequency content of the models. These results will guide subsequent investigation of flow within localized regions, including applying vector Slepian decomposition methods.
Spherical Slepian functions
Outer core flow
Geostrophic flows
Tangent cylinder
Over 95% of the geomagnetic field observed at Earth's surface arises from the Earth's fluid outer core (e.g., Jacobs 1987). The main field is generated by the geodynamo which converts fluid motion of the liquid iron into electric and magnetic energy. The geodynamo is a complex and self-sustaining magnetohydrodynamic system which is still relatively poorly understood in detail (e.g., Jones 2015). Several geomagnetic issues contribute to this limited knowledge, including the loss of resolution due to upward continuation of the magnetic field from the core–mantle boundary to the Earth's surface, the masking effects of the intervening weakly conductive mantle and the imposition of the signals from the shallow magnetized crust (e.g., Hide 1969; Holme 1998). Due to these restrictions we are unable to image the small spatial scales and the very rapid temporal changes of the core field. Though main field models are continuously improving, thanks to better datasets from both ground-based observatories and satellite missions, there is an inherent limit to the achievable spatial accuracy of the core field around spherical harmonic degree and order 20, as observed at the surface (Hulot et al. 2009).
Large-scale changes in the magnetic field at the Earth's surface over the period of months to years (termed secular variation or SV) can be used as a 'tracer' for the flow of the liquid at the core–mantle boundary (CMB), by assuming the conductivity of the core fluid is sufficiently high such that diffusion is negligible on periods of years to decades; this is known as the 'frozen-flux' hypothesis (Roberts and Scott 1965). By assuming frozen-flux, the SV can be inverted for the flow along the CMB. This allows us to probe some of the dynamical properties of the core, though at the cost of requiring additional assumptions about the structure of the flow itself. This is in order to reduce the ambiguity involved in inverting for two parameters (the eastward and northward flow components) from a single set of observations of the radial SV component (e.g., Backus 1968).
Fig. 1 Tangentially geostrophic ambiguous patches used in decomposition. Contours of \(B_r/\cos \theta\) for values less than \(10^6\) nT. The shaded regions are the area considered to be ambiguous when the tangentially geostrophic assumption is applied. These regions were defined by considering the locations of closed contours which do not connect to the equator. The \(B_r\) values are from the 2015 IGRF field model (Thébault et al. 2015). This and all other figures in this projection are centered on longitude \(90^{\circ}\).
Fig. 2 Location of the tangent cylinder and tangentially geostrophic ambiguous patches. The tangent cylinder can be described by two caps subtending an angle of \(21^\circ\) at the Earth's center (red), projecting the inner core to the surface of the core–mantle boundary (green) along the rotation axis. A schematic illustration of the tangentially geostrophic ambiguous patches is also shown in blue.
Fig. 3 Decomposition of three synthetic spectra. Three synthetic flow models, with different energy spectra, decomposed using the Shannon number (\(K=1504\)) for the ambiguous patches. The top row (left: increasing; middle: flat; right: decreasing) shows maps of the azimuthal (\(u_\phi\)) component of the toroidal flow. The second and third rows illustrate the spatial decomposition. The fourth row shows the summation of the decomposed parts of the flow model, which matches the first row. The bottom row shows the spectral decomposition. Note the different magnitudes of aliasing along the boundaries of each separated region, depending on the shape of the spectra.
Fig. 4 Decomposition of the numerical dynamo flow simulation in the eastward direction (\(u_{\phi}\) component). Decomposition of a snapshot flow from a numerical model with coefficients up to degree 60 (3721 coefficients) using the Shannon number (\(K=1504\)) to represent the flow within the ambiguous patches.
Fig. 5 Decomposition of the numerical dynamo flow simulation in the southward direction (\(u_{\theta}\) component). Decomposition of a snapshot flow from a numerical model using the coefficients up to degree 60 (3721 coefficients) using the Shannon number (\(K=1504\)) to represent the flow within the ambiguous patches.
Here we localize such flow models, represented by spherical harmonic coefficients, to a particular region of the CMB surface by decomposing them with scalar spherical Slepian functions (Simons and Dahlen 2006). Spherical Slepian functions have been applied to a number of geophysical and astronomical problems where only partial or noisy datasets covering a fraction of the surface of a sphere are available. The functions provide a best estimate for incomplete spatio-spectral signals, offering an optimal trade-off between spatial and spectral leakage (Simons et al. 2006). They have been used for determining local gravitational changes arising from large earthquakes, the spectral structure of the cosmic microwave background radiation, the study of geodetic datasets containing temporal gaps, and to investigate the global crustal magnetic field (Beggan et al. 2013; Dahlen and Simons 2008; Harig and Simons 2012; Kim and von Frese 2017; Simons et al. 2009).
These studies have all successfully retrieved useful information about the geophysical system under consideration, often through operation on a global spherical harmonic model. For example, Harig and Simons (2015, 2016) examine relatively small polar regions within the GRACE and GOCE global gravity models to deduce ice-loss over time. We note there are alternative methods for isolating regions of a spherical harmonic model on a spherical surface, such as spherical cap harmonics or localized basis functions, which have been used for modeling the main magnetic field on Earth and other planets (Lesur 2006; Thébault et al. 2006, 2018). Spherical Slepian functions are the only functions that achieve regional separation in a fully analytical and easily computable framework, depending only on the geometry of the region (Beggan et al. 2013). The double orthogonality of the Slepian functions, over the region of interest and the sphere, is a property that is convenient and very welcome on statistical grounds, for example, when inversions for the source or estimations of the power spectral density of the field components or the overall potential are being made on the basis of actual satellite data (Dahlen and Simons 2008; Plattner and Simons 2013; Simons et al. 2006).
Fig. 6 Varying the number K of functions representing 'inside' the patches for the numerical dynamo flow simulation decomposition. Decomposition of the numerical dynamo model snapshot into the 'inside' and 'outside' patches. The K value is the number of functions included inside the patches and \(3721-(K+1)\) is the number of functions outside the patches. The \(\phi\)-component of the toroidal flow is shown.
Fig. 7 Decomposition of inverted flow from satellite data in the eastward direction (\(u_{\phi}\)). Decomposition of a steady flow model obtained from satellite data (up to degree 20) using an 'optimum' value of \(K = 199\) (coefficients out of 441) to represent flow within the patches. The Shannon number (\(K=178\)) produced a similar decomposition, but the unwanted boundary signals were slightly larger compared to the optimum K decomposition, which is shown in this image.
Fig. 8 Decomposition of inverted flow from satellite data in the southward direction (\(u_{\theta}\)). Decomposition of a steady flow model obtained from satellite data (up to degree 20) using an 'optimum' value of \(K = 199\) (coefficients out of 441) to represent flow within the patches. The Shannon number (\(K=178\)) produced a similar decomposition, but the unwanted aliasing was slightly larger compared to our chosen 'optimum' K decomposition, which is shown in this image.
Fig. 9 Example of coefficient splitting from the tangentially geostrophic ambiguous patches for flows inverted from satellite data. An example of coefficient values for the input, inside the tangentially geostrophic ambiguous patches, the complementary outside region and the summed coefficients of the decomposition, for the Shannon number of the inverted satellite data. The input and summed values are exactly the same for each coefficient, but there are some coefficients which separate into much larger absolute values than their corresponding input.
Fig. 10 Decomposition of the numerical dynamo simulation using only coefficients that are larger than the input value. Flow maps of the decomposition shown in Fig. 7 when only those coefficients whose absolute values are larger than the input coefficients are plotted. There is no longer a distinction between inside and outside the region of interest, and the flow is concentrated in strong bands along the boundaries of the regions. The \(\phi\)-component of the toroidal flow is shown.
Fig. 11 Decomposition of inverted flow from satellite data plotted as velocity vectors, stream function and poloidal potential. A decomposition of the toroidal and poloidal components of the inverted flow from satellite data, seen in Figs. 7 and 8, as velocity (column one), stream function (column two) and poloidal potential (column three). The input, top row, is split into inside and outside the tangentially geostrophic ambiguous patches, rows two and three, before being summed together in the final row. As with previous decompositions, the input and the summed decomposition are identical, and flow is mostly restricted to the region of interest within the decomposed maps.
Based on the success of the aforementioned studies, we apply similar techniques to a series of global core surface flow models. This paper focuses on the use of scalar spherical Slepian functions to decompose the flow models in order to investigate how localizing flow within and outside the tangentially geostrophic ambiguous regions (Backus and Le Mouël 1986) affects the overall spatial and spectral distribution of the flow. In Additional file 1, we also show the flow localized to the surface of the cylinder tangent to the inner core and in its complement. It is believed that the cylinder tangent to the inner core has a different flow regime (compared to the lower latitude parts of the outer core) where velocities are significantly enhanced locally (Jones 2015; Livermore et al. 2016).
The motivation for this study was to act as a preliminary investigation into the application of scalar spherical Slepian functions to outer core surface flow. Localizing the scalar potentials of the flow, rather than the vector flow itself, simplifies the matrix multiplications to perform the separation. These investigations should inform subsequent inversions of SV data for flows over localized patches of the CMB. We had hoped that forecasts of the geomagnetic field based on flow advection (Beggan and Whaler 2010) could be improved by using flows confined to regions where the dynamics are 'known' or the flow is less affected by non-uniqueness. Unfortunately, this study indicates that this aim will be difficult to achieve in this manner because strong leakage is observed which is too severe to make meaningful forecasts. Instead, we summarize our attempts to mitigate the leakage while noting that care must be taken when attempting further work in the application of spherical Slepian functions to core surface flows.
In the next section we explain the background and methodology for decomposing flow models using spherical scalar Slepian functions. In "Flow models: synthetic, numerical and inverted" section we describe the three flow model types decomposed—synthetic, numerical and inverted from satellite data—and the results are shown in "Decomposition of flow models" section. Synthetic flow models (maximum spherical harmonic degree and order, \(L=60\)) are initially decomposed to appraise the technique. We proceed to test the decomposition on a high degree (\(L=60\)) model from a geodynamo simulation and then a low degree (\(L=20\)) model from inverting satellite SV data. We discuss our findings in "Analysis of the flow separation" section with consideration of the issues of strong aliasing that we have discovered. We focus on the tangentially ambiguous regions in the paper, but the analysis of the tangent cylinder decomposition is detailed in Additional file 1.
Representing flow models
A compact and convenient way to represent core surface flow is through the use of spherical harmonics, with model coefficients for the toroidal and poloidal scalars describing the flow (e.g., Roberts and Scott 1965). Because the velocity is non-divergent and its radial component across the boundary vanishes, the horizontal velocity vector \(\mathbf{u }_H\) can be expressed in terms of the poloidal and toroidal scalars, S and T, which can be expanded in spherical harmonics, in a spherical polar coordinate system \((r, \theta , \phi )\):
$$\begin{aligned} \mathbf{u }_H = \nabla \times (T\mathbf{r }) + \nabla _H (rS) \end{aligned}$$
$$\begin{aligned} T(\theta , \phi ) = \sum _{l,m} t^m_l Y^m_l(\theta , \phi ) \quad \text {and} \quad S(\theta , \phi ) = \sum _{l,m} s^m_l Y^m_l(\theta , \phi ) \end{aligned}$$
The coefficients \(\{t^m_l, s^m_l\}\) are the flow model coefficients, \(Y^m_l(\theta , \phi )\) are the Schmidt quasi-normalized real spherical harmonics, and l and m are the degree and order, respectively.
The north–south, \(u_\theta\), and west–east, \(u_\phi\), components of the horizontal velocity are
$$\begin{aligned} u_{\theta }= & {} \sum _{l=1}^{L} \sum _{m=0}^{l} \left[ - t ^{m(c)}_l \sin (m\phi ) + t ^{m(s)}_l \cos (m\phi )\right] \frac{m}{\sin \theta } P^m_l(\cos \theta ) \nonumber \\&+ \sum _{l=1}^{L} \sum _{m=0}^{l} \left[ s ^{m(c)}_l \cos (m\phi ) + s ^{m(s)}_l \sin (m\phi )\right] \frac{dP^m_l(\cos \theta )}{d\theta }\nonumber \\ u_{\phi }= & {} \sum _{l=1}^{L} \sum _{m=0}^{l} \left[ - t ^{m(c)}_l \cos (m\phi ) - t ^{m(s)}_l \sin (m\phi )\right] \frac{dP^m_l(\cos \theta )}{d\theta } \nonumber \\&+ \sum _{l=1}^{L} \sum _{m=0}^{l} \left[ - s ^{m(c)}_l \sin (m\phi ) + s ^{m(s)}_l \cos (m\phi )\right] \frac{m}{\sin \theta } P^m_l(\cos \theta ) \end{aligned}$$
where \(P^m_l\) are the associated Legendre polynomials for degree (l) and order (m) and superscripts (c) and (s) denote the coefficients of T and S multiplying \(\cos (m\phi )\) and \(\sin (m\phi )\), respectively.
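To make the expansion concrete, a minimal sketch (not the authors' code) of evaluating \(u_\theta\) and \(u_\phi\) at a single point from a set of toroidal and poloidal coefficients is given below. The coefficient containers, the function names and the handling of the Schmidt quasi-normalization are illustrative assumptions; depending on the library's sign convention (the Condon–Shortley phase), an extra factor of \((-1)^m\) may be required to match the geomagnetic convention.

```python
# Minimal sketch: horizontal velocity components from toroidal/poloidal coefficients.
# Assumes coefficient arrays t_cos[l][m] = t_l^{m(c)}, t_sin[l][m] = t_l^{m(s)}, etc.
import numpy as np
from math import factorial
from scipy.special import lpmn

def schmidt(l, m):
    """Schmidt quasi-normalization factor (keep L modest to avoid factorial underflow)."""
    return 1.0 if m == 0 else np.sqrt(2.0 * factorial(l - m) / factorial(l + m))

def horizontal_velocity(theta, phi, t_cos, t_sin, s_cos, s_sin, L):
    """Evaluate (u_theta, u_phi) at colatitude theta, longitude phi (avoid the poles)."""
    x = np.cos(theta)
    P, dPdx = lpmn(L, L, x)            # unnormalized P_l^m(x) and dP/dx, indexed [m, l]
    dPdtheta = -np.sin(theta) * dPdx   # chain rule: d/dtheta = -sin(theta) * d/dx
    u_theta = u_phi = 0.0
    for l in range(1, L + 1):
        for m in range(l + 1):
            N = schmidt(l, m)
            Plm, dPlm = N * P[m, l], N * dPdtheta[m, l]
            c, s = np.cos(m * phi), np.sin(m * phi)
            # toroidal contribution
            u_theta += (-t_cos[l][m] * s + t_sin[l][m] * c) * m / np.sin(theta) * Plm
            u_phi   += (-t_cos[l][m] * c - t_sin[l][m] * s) * dPlm
            # poloidal contribution
            u_theta += (s_cos[l][m] * c + s_sin[l][m] * s) * dPlm
            u_phi   += (-s_cos[l][m] * s + s_sin[l][m] * c) * m / np.sin(theta) * Plm
    return u_theta, u_phi
```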
Decomposing flow models
Spherical harmonics are global functions, but they can be converted by linear transformation into spherical Slepian basis functions that are localized onto two or more regions of the sphere. We summarize the methodology here for clarity but refer the reader to Simons (2010) for a detailed derivation.
Spherical harmonics up to degree and order L are typically expressed as a vector of \((L+1)^2\) elements, each of which is a function of position \((\theta ,\phi )\) on the unit sphere:
$$\begin{aligned} \mathbf{y }({\theta ,\phi })=\left[ \begin{array}{ccccc} Y_{0}^{0}({\theta ,\phi })&\cdots&Y_{l}^{m}({\theta ,\phi })&\cdots&Y_{L}^{L}({\theta ,\phi }) \end{array}\right] ^{{\mathsf {T}}}. \end{aligned}$$
where the ordering of the spherical harmonics \(Y_{l}^{m}\) is arbitrary. The coefficient multiplying the monopole harmonic (\(Y_{0}^{0}\)) is usually ignored (or set to zero) in geomagnetic and core flow studies, but is included here to preserve generality.
On a unit sphere, a potential \(V({\theta , \phi })\) up to degree L is represented in a spherical harmonic basis by a single \((L+1)^{2}\)-dimensional vector of coefficients, \(\mathbf{v }\). The potential on the surface is obtained from these coefficients as:
$$\begin{aligned} V(\theta ,\phi )=\mathbf{v }\cdot \mathbf{y }(\theta ,\phi ). \end{aligned}$$
Spherical Slepian functions provide a different set of orthonormal basis functions written as:
$$\begin{aligned} \mathbf{g }(\theta ,\phi )=\left[ \begin{array}{ccccc} g_{1}(\theta ,\phi )&\cdots&g_{\alpha }(\theta ,\phi )&\cdots&g_{(L+1)^{2}}(\theta ,\phi ) \end{array}\right] ^{{\mathsf {T}}}. \end{aligned}$$
Each of these basis functions is linearly related to spherical harmonics by the expansion
$$\begin{aligned} g_{\alpha }(\theta ,\phi ) = \mathbf{g }_{\alpha }\cdot \mathbf{y }(\theta ,\phi ). \end{aligned}$$
\(\mathbf{g }\) is produced from the spherical surface harmonic basis by multiplying \(\mathbf{y }({\theta , \phi })\) by a unitary matrix
$$\begin{aligned} \mathbf{G }^{{\mathsf {T}}}=\left[ \begin{array}{c} \mathbf{g }_{1}^{{\mathsf {T}}}\\ \vdots \\ \mathbf{g }_{(L+1)^{2}}^{{\mathsf {T}}} \end{array}\right] . \end{aligned}$$
The matrix \(\mathbf{G }\) is constructed by optimization to localize the solution over specified regions (and their complements) for a given maximum spherical harmonic degree L. The procedure determines a complete set of basis functions, which are ordered in terms of contribution to the region considered and, finally, split into the 'in' and 'out' of region sections as two distinct domains:
$$\begin{aligned} \mathbf{G }^{{\mathsf {T}}}\mathbf{y }(\theta ,\phi )=\left[ \begin{array}{c} \mathbf{G }_{{in}}^{{\mathsf {T}}}\mathbf{y }(\theta ,\phi )\\ \mathbf{G }_{{out}}^{{\mathsf {T}}}\mathbf{y }(\theta ,\phi ) \end{array}\right] =\left[ \begin{array}{c} g_{1}(\theta ,\phi )\\ \vdots \\ g_{K}(\theta ,\phi )\\ g_{K+1}(\theta ,\phi )\\ \vdots \\ g_{(L+1)^{2}}(\theta ,\phi ) \end{array}\right] =\left[ \begin{array}{c} \mathbf{g }_{\text {in}}(\theta ,\phi )\\ \mathbf{g }_{\text {out}}(\theta ,\phi ) \end{array}\right] , \end{aligned}$$
where K indicates the last element of the functions primarily concentrated in the first domain (in this case 'in'), and \(K+1\) labels the beginning function for the second ('out') domain. If an optimal decomposition has occurred, the result of summing the basis functions for the 'in' and 'out' regions will be identical to the input function and the 'in' region will be fully recreated by using all functions up to the K value with no signal outside the region of interest. An optimal decomposition is less likely when a band-limited signal is decomposed because the bandwidth, L, controls spatial resolution. Therefore, the resulting decomposition of a model with restricted spherical harmonic degree is more likely to exhibit spatial aliasing and unwanted leakage outside the region of interest.
The Slepian functions span a linear subspace of \(\mathbf{y } (\theta , \phi )\) in which the sum-squared function value (in this case, the energy) over the chosen region, R, is maximized. This 'localization' matrix is symmetric, and the subspace of maximum energy is readily obtained by eigenvalue decomposition. We compute the Gram matrix of energy in R as:
$$\begin{aligned} \mathbf{D } = \int _R \mathbf{y }(\theta , \phi ) \mathbf{y }^T (\theta ,\phi ) {\text {d}}\Omega = \int _R \left[ \begin{array}{ccc} Y_0^0Y_0^0 &{} \cdots &{} Y_0^0Y_L^L\\ \vdots &{} \ddots &{} \vdots \\ Y_0^0Y_L^L &{} \cdots &{} Y_L^LY_L^L \end{array}\right] {\text {d}}\Omega \end{aligned}$$
where the eigenvalues and eigenvectors of \(\mathbf{D }\) are defined as:
$$\begin{aligned} \mathbf{DG } = \mathbf{G }{{\varvec{\Lambda }}} \end{aligned}$$
Each column of \(\mathbf{G }\) contains one eigenvector, and \({{\varvec{\Lambda }}}\) is a diagonal matrix with the corresponding eigenvalues:
$$\begin{aligned} {{\varvec{\Lambda }}} = {\text {diag}}(\lambda _1, \ldots , \lambda _\alpha , \ldots , \lambda _{(L+1)^2}). \end{aligned}$$
Due to the symmetry of \(\mathbf{D }\), all of its eigenvalues are positive (or zero) and real, and its eigenvectors are orthogonal, which makes \(\mathbf{G }\) unitary. K, called the Shannon number, is typically chosen from inspection of the eigenvalues which contribute less than 50% to the 'in' patch (Simons et al. 2006). Note that in this study we are localizing the potentials of the flow, T and S of Eq. (1), not the flows inverted from them. The implications of this will be discussed in "Analysis of the flow separation" section.
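The construction can be made concrete with a short, self-contained sketch. This is not the authors' code (they use the Simons et al. software), and it is deliberately crude: the region is a single polar cap rather than the ambiguous patches, the quadrature is a simple rectangle rule, and the variable names are illustrative. It builds \(\mathbf{D}\), eigendecomposes it, and splits an arbitrary coefficient vector into its 'in' and 'out' parts.

```python
# Minimal sketch of the scalar Slepian construction for a north polar cap of
# half-angle 21 degrees (a stand-in for the regions discussed in the text).
import numpy as np
from scipy.special import sph_harm

L = 10                                  # maximum degree, kept small for speed
CAP = np.radians(21.0)
N_COEFF = (L + 1) ** 2

def real_ylm_vector(colat, lon):
    """Orthonormal real spherical harmonics up to degree L at one point."""
    out = []
    for l in range(L + 1):
        for m in range(-l, l + 1):
            Y = sph_harm(abs(m), l, lon, colat)   # scipy order: (m, n, azimuth, polar)
            if m == 0:
                out.append(Y.real)
            elif m > 0:
                out.append(np.sqrt(2) * (-1) ** m * Y.real)
            else:
                out.append(np.sqrt(2) * (-1) ** m * Y.imag)
    return np.array(out)

# Gram (localization) matrix D over the cap, by a crude quadrature rule.
colats = np.linspace(0.0, CAP, 30)
lons = np.linspace(0.0, 2 * np.pi, 60, endpoint=False)
dtheta, dphi = colats[1] - colats[0], lons[1] - lons[0]
D = np.zeros((N_COEFF, N_COEFF))
for colat in colats:
    w = np.sin(colat) * dtheta * dphi
    for lon in lons:
        y = real_ylm_vector(colat, lon)
        D += w * np.outer(y, y)

# Eigendecomposition: columns of G are the Slepian functions, ordered by concentration.
lam, G = np.linalg.eigh(D)
order = np.argsort(lam)[::-1]
lam, G = lam[order], G[:, order]

K = int(round(lam.sum()))               # Shannon number ~ (L+1)^2 * area(R) / (4*pi)
G_in, G_out = G[:, :K], G[:, K:]

# Split an arbitrary coefficient vector (e.g. a toroidal scalar T) into the two parts.
v = np.random.default_rng(0).standard_normal(N_COEFF)
v_in = G_in @ (G_in.T @ v)
v_out = G_out @ (G_out.T @ v)
assert np.allclose(v, v_in + v_out)     # the two parts always sum back to the input
```

In the paper the same machinery is applied to the toroidal and poloidal scalars T and S of the flow, with the tangentially geostrophic ambiguous patches or the tangent-cylinder caps in place of the simple cap used here.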
Localization regions
To apply spherical Slepian decomposition, a region of interest R must be defined in order to separate the signal into two complementary parts. In this study, the tangentially geostrophic unambiguous regions are chosen as an area of particular interest as well as the two polar caps subtended by the intersection of the cylinder tangent to the inner core.
Tangentially geostrophic ambiguous region
The geostrophic flow assumption is one of the most frequently used within flow modeling to reduce the inherent ambiguity. Le Mouël (1984) and Hills (1979) originally proposed the constraint, which assumes that the pressure gradient, the Coriolis and the buoyancy forces balance in the Navier–Stokes (momentum) equation. Hence, by assuming that gravity is purely radial, the radial component of the thermal wind equation vanishes, giving
$$\begin{aligned} \nabla _{H} \cdot \ (\mathbf{u }_H \cos \theta ) = 0. \end{aligned}$$
Substituting this into the radial component of the frozen-flux induction equation gives
$$\begin{aligned} \dot{B}_r + \cos \theta \mathbf{u }_H \cdot \nabla _{H} (B_r / \cos \theta ) = 0 \end{aligned}$$
where \(\dot{B}_r\) is the first time derivative of the radial magnetic field (e.g., Holme 2015). The flow is unique at all points along a contour of \(B_r / \cos \theta\) that intersects the equator, but the flow elsewhere in the ambiguous patches is only determined in the direction perpendicular to the \(B_r/\cos \theta\) contours (Backus and Le Mouël 1986).
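The substitution is a short product-rule step that the text compresses. Starting from the standard radial frozen-flux induction equation \(\dot{B}_r + \nabla _{H} \cdot (\mathbf{u }_H B_r) = 0\) and writing \(\mathbf{u }_H B_r = (\mathbf{u }_H \cos \theta )(B_r/\cos \theta )\),
$$\begin{aligned} \nabla _{H} \cdot (\mathbf{u }_H B_r) = \frac{B_r}{\cos \theta }\, \nabla _{H} \cdot (\mathbf{u }_H \cos \theta ) + (\mathbf{u }_H \cos \theta ) \cdot \nabla _{H} \left( \frac{B_r}{\cos \theta }\right) , \end{aligned}$$
and the first term on the right vanishes under the tangentially geostrophic constraint, leaving the equation quoted above.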
Assuming tangential geostrophy, the flow ambiguity disappears over the majority of the outer core surface except in 'ambiguous patches,' where only one component is determined. Figure 1 shows the three ambiguous patches, from contours of \(B_r/ \cos \theta\) that do not connect to the equator either directly or by saddle points. The shaded regions are the ambiguous patches and correspond to \(40\%\) of the surface area of the CMB, similar to values quoted elsewhere in the literature (e.g., Amit and Pais 2013).
Cylinder tangent to the inner core
It is believed that the flow within the polar regions of the outer core inside a cylinder tangential to the inner core (and parallel to the rotation axis, as shown in Fig. 2) is substantially different to the flow outside this zone (Aubert et al. 2013; Hollerbach and Gubbins 2007; Jones 2015; Pais and Jault 2008). One theory to explain the difference in convection within and outside the tangent cylinder is that inside the tangent cylinder the gravity and the rotation vectors are largely parallel, whereas outside they are largely perpendicular (Hollerbach and Gubbins 2007).
The tangent cylinder is closely linked to rapid changes of the magnetic field, within and along the boundaries of the spherical caps, with recent research suggesting fast-moving features can be observed with high resolution field modeling from satellite data (Finlay et al. 2016; Livermore et al. 2016). The Taylor–Proudman theorem states that rotating fluids perturbed by a solid body tend to form non-axisymmetric, z-invariant fluid columns parallel to the axis of rotation, called Taylor columns. These are thought to operate in the Earth's outer core (Glatzmaier and Roberts 1996). Aurnou et al. (2015) summarized how advanced asymptotically reduced theoretical models, efficient Cartesian direct numerical simulations and laboratory experiments show good agreement that axially coherent, helical convection columns are present and break into three-dimensional geostrophic turbulence. These dynamics are not present within the tangent cylinder. Instead, observations of the Earth's magnetic field suggest that there are anticyclonic, axisymmetric, z-variant polar vortices inside the tangent cylinder (Hulot et al. 2002; Olson and Aurnou 1999; Sreenivasan and Jones 2005). Cao et al. (2018) recently suggested three alternative mechanisms for the variations in the cylinder tangent to the inner core based on inertia-free, axisymmetric numerical simulations but concluded further work was required to conduct quantitative assessment under Earth's core conditions.
The area on the outer core surface which lies within the tangent cylinder is described by two spherical caps, each subtending a half-angle of \(21^{\circ }\) with respect to the Earth's rotation axis at the north and south poles. This investigation provides a contrast with other polar caps studies using spherical Slepian functions (Plattner and Simons 2015; Simons and Dahlen 2006). For brevity, the tangent cylinder results are included in Additional file 1.
The flow models on which the investigation of Slepian decomposition are focused come from three sources: synthetic flow models up to degree and order 60, a flow model extracted from a numerical dynamo simulation of Aubert et al. (2013), again to degree and order 60, and a steady flow model inverted from three years of satellite magnetic data to degree and order 20. With these, we seek to illustrate the capability and limitations of the scalar Slepian decomposition of core flow models, which will inform subsequent inversions of SV data for CMB flow in localized regions.
Synthetic flow models
Three sets of synthetic flow models with different kinetic energy spectral 'colors' (approximately red, white and blue) were generated to test the robustness of the decomposition methodology. Flow coefficients were randomly sampled from a normal distribution, \({\mathcal {N}}(0,1)\). In order to produce a 'flat' (white) spectrum, each coefficient was divided by its degree, l. In the decreasing (blue) power spectrum model, the coefficients were divided by \(l \sqrt{l+2}\). The coefficients of the model with increasing (red) spectral energy required no further modification.
The kinetic energy, \(E_{\mathrm{Kinetic}}\), for each degree is:
$$\begin{aligned} E_{\mathrm{Kinetic}}(l) = \frac{1}{d^2} \frac{l(l+1)}{2l+1} \sum _{m=0}^{l} \left\{ (t_l^m)^2 + (s_l^m)^2 \right\} \end{aligned}$$
where d is the radius of the surface considered (here, the CMB) and \(t_l^m\) and \(s_l^m\) are spherical harmonic coefficients representing the stream function and/or the velocity potential, i.e., corresponding to the toroidal and/or poloidal component (Le Mouël et al. 1985). Therefore, the kinetic energy for each model can be represented on a 'power spectrum' or 'degree variance' given by Eq. 15, which removes phase information and sums over all orders to give a value at each individual degree (Holme 2015).
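A minimal sketch (not the authors' code) of how coefficient sets with the three spectral behaviours described above can be generated, and their degree spectra computed, is given below. The function names and random seed are illustrative, and d is set to 1 so the spectra are relative.

```python
# Minimal sketch: synthetic flow coefficients with increasing, flat and decreasing
# kinetic energy spectra, and the corresponding degree-variance computation.
import numpy as np

L = 60
rng = np.random.default_rng(42)

def random_coeffs():
    """Random t_l^m (or s_l^m) coefficients drawn from N(0, 1), indexed [l][m]."""
    return [rng.standard_normal(l + 1) for l in range(L + 1)]

def scale(coeffs, rule):
    """Apply the per-degree scaling rules described in the text."""
    out = []
    for l, row in enumerate(coeffs):
        if l == 0 or rule == "increasing":       # no further modification
            out.append(row.copy())
        elif rule == "flat":                      # divide each coefficient by l
            out.append(row / l)
        elif rule == "decreasing":                # divide each coefficient by l*sqrt(l+2)
            out.append(row / (l * np.sqrt(l + 2)))
    return out

def kinetic_energy_spectrum(t, s, d=1.0):
    """E_Kinetic(l) = (1/d^2) * l(l+1)/(2l+1) * sum_m [(t_l^m)^2 + (s_l^m)^2]."""
    return np.array([
        (l * (l + 1)) / ((2 * l + 1) * d ** 2) * (np.sum(t[l] ** 2) + np.sum(s[l] ** 2))
        for l in range(1, L + 1)
    ])

for rule in ("increasing", "flat", "decreasing"):
    t = scale(random_coeffs(), rule)
    s = scale(random_coeffs(), rule)
    E = kinetic_energy_spectrum(t, s)
    print(f"{rule:>10s}: E(l=1) = {E[0]:.3f}, E(l={L}) = {E[-1]:.3f}")
```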
Flow from a numerical dynamo simulation
To investigate the decomposition of a more realistic flow model (\(L=60\)), a snapshot (single time step) from the Coupled Earth numerical geodynamo model was used (J. Aubert, pers. comm., 2017). The model's spatial and temporal variations are comparable with observations of real fields and contain some Earth-like aspects of the SV (Aubert et al. 2013; Christensen et al. 2010). Further details of the geodynamo simulation and its behavior can be found in Aubert et al. (2013).
Flow inverted from satellite data
Large satellite vector data sets of the Earth's magnetic field have become available in recent years which give better spatial coverage compared to ground observatory datasets. By combining the data into a set of 'virtual observatories' (VO) in space, they can mimic ground-based observatories (Barrois et al. 2018; Beggan et al. 2009; Mandea and Olsen 2006). Improved VO calculations used sums and differences of CHAMP along-track measurements to calculate time series at 500 equally spaced VO, based on quiet data selected from all local times, corrected for external, induced and crustal fields (Hammer 2018). Data from within 200 km of the VO were reduced to the central point using a cubic expansion of the potential. VO SV data and associated \(3\times 3\) data covariance matrix for each VO were inverted for a weakly regularized flow to \(L=20\) with minimal month-to-month time variation using the algorithm described by Whaler et al. (2016). The average flow from thirty-six months between 2007.0 and 2010.0 has energy roughly equally partitioned between the toroidal and poloidal components, and between the geostrophic and ageostrophic components, in contrast to the more typical predominantly toroidal and geostrophic flows obtained with stronger regularization.
We present results from the decomposition of each of the flow models for the region representing the tangentially geostrophic unambiguous area of Fig. 1, based on synthetic data, a numerical dynamo and satellite data. The synthetic models are used as a test of the method and give insight into how the shape (i.e., color) of the energy spectrum affects the decomposition. The numerical dynamo flow model allows us to investigate the decomposition of a more realistic flow to a relatively high degree. Although the Shannon number provides the optimal value for the spectral-spatial separation, we also consider varying the number of functions, K, which we use to represent the flow inside the patches and in its complement. We show both the \(\phi\) (or west-east) and \(\theta\) (or north–south) components.
Figure 3 shows the decomposition of the three synthetic models. The Shannon number of \(K = 1504\), out of 3721 coefficients, was used. When the spatial values of the two regions (second and third rows) are added together, the sum (fourth row) creates a map almost identical to the input (first row). The absolute maximum difference between the input and the summed result of the decomposition is \(<5 \times 10^{-13}\) km/year, which can be ascribed to rounding error. The bottom row of Fig. 3 shows the power spectra of the input (red), inside (green) and outside (blue) regions. As they match exactly, the summed spectra (black dashed) overlap the input spectra (red).
Although the Slepian decomposition offers an optimal trade-off between spatial and spectral fidelity, there are a number of unavoidable side effects. The plots of the 'inside' and 'outside' ambiguous patches show two such features. Firstly, there is strong aliasing of the signal along the boundary between the regions. Secondly, the strength of the aliasing depends on the slope of the input spectrum. As can be seen from Fig. 3, more leakage occurs when the spectral energy decreases with degree (right hand column), the typical spectral slope of actual flows.
Numerical dynamo flow model
The numerical dynamo flow decomposition is shown in Figs. 4 and 5. The comparison between the input and sum of the decomposed regions shows very few differences in the spectral and spatial domains. However, when the coefficients inside and outside the ambiguous patches are plotted, strong flows appear along the boundary, particularly in the toroidal \(\phi\) (eastward) and the poloidal \(\theta\) (southward) components.
It has been noted in previous studies that the Shannon number, dependent on the proportion of the surface area in the patches and the spherical harmonic degree of the model, is not always the best parameter for separating the model into the 'inside' and 'outside' patches (Beggan et al. 2013; Plattner and Simons 2013). The effect of altering the K value on the decomposition of the dynamo simulation flow is shown in Fig. 6. When K is reduced below the Shannon number (in this case, \(K=1504\)), the magnitude of the flow within the region is reduced, along with some of the aliasing. It is only by using small values of K that the aliased signals are strongly reduced but, at these low K values, flow within the patches is a poor recreation of the input. This result suggests that although leakage within the inside and outside regions can be ameliorated, artificially large flow values along the region boundaries remain.
Core flow inverted from satellite data
The maximum degree of the flow model inverted from satellite SV data is lower than the synthetic or geodynamo models at \(L=20\). As a result, the flow features are larger scale compared to the numerical dynamo or synthetic data. For consistency only the tangentially geostrophic component of the flow was used in the decomposition.
The optimum solution to minimize the boundary leakage was found to be when the K value was 199 out of 441 coefficients, compared to a Shannon number of 178 (Figs. 7, 8). The summed coefficients again produce an exact recreation of the input coefficients, regardless of K value. This is consistent with the experiments using the numerical dynamo model, even though the bandwidth and spectral content of the two are rather different. The separation of the flow into the two parts is not ideal, as energy from the low degrees leaks across the patch boundaries, causing strong flows along the edges.
The results from synthetic flow models indicate that all models, irrespective of the slope of their energy spectrum, can be readily divided into multiple regions such that the difference between the addition of the flows from the separated regions compared to the input signal is negligible. However, while the method enables near-perfect reconstruction of the input model from the decomposed regions, there are obvious issues with aliasing/spectral leakage within the individual regions as seen in Figs. 3 to 8. The strong flows seen at the boundaries of the ambiguous patches are particularly apparent in the azimuthal component of the toroidal part of the flow reconstructed both inside and outside it.
To probe the source of the leakage, we examined how individual coefficients are split to effect the flow separation. Some input coefficients are split into a larger magnitude value for inside the patches and an oppositely signed coefficient (of similar magnitude) for outside the patches (as shown in Fig. 9). This is especially noticeable for small coefficients. The majority of the coefficients affected by this splitting are in the higher degrees (though they occur throughout the model). We tested to see whether the artificially enlarged coefficients were the cause of the leakage by plotting flow maps using only those coefficients which had greater absolute values compared to their input coefficient (Fig. 10). The maps have a very distinctive concentration of rapid flow at the region edges and are much weaker away from these boundaries. Thus, the large values produce unrealistic but oppositely directed flows along the patch edges which cancel when the flows from inside and outside patches are summed.
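A small illustrative helper (not from the paper) for the diagnostic just described: flag the coefficients whose decomposed value exceeds the corresponding input coefficient in absolute magnitude, and keep only that subset when synthesizing a map. The names follow the Slepian sketch given earlier.

```python
import numpy as np

def keep_enlarged_only(v_input, v_part):
    """Zero every coefficient except those whose decomposed magnitude exceeds the input."""
    mask = np.abs(v_part) > np.abs(v_input)
    out = np.zeros_like(v_part)
    out[mask] = v_part[mask]
    return out

# e.g. v_in_big = keep_enlarged_only(v, v_in); maps synthesized from v_in_big
# concentrate along the region boundaries, as described in the text.
```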
We next checked whether removing these coefficients produced a better decomposition of the flow models. However, the clear distinction between the inside and outside patches disappeared, and the flow within the region of interest was no longer similar to the input. Thus, all coefficients are required for an accurate decomposition of the flow models, and we cannot remove the spectral leakage by ignoring the 'poorly behaved' subset of coefficients.
We also investigated the cross-spectral leakage plots by examining the \(\mathbf{G } \mathbf{G }^T\) matrix, which should ideally be the identity matrix. The elements of \(\mathbf{G } \mathbf{G }^T\) are related to size of the region of interest, its shape, the degree resolution of the model and the truncation level of the basis. The complex shapes of the tangentially geostrophic ambiguous patches produce departures from the identity matrix, but they are no worse than those found by Beggan et al. (2013) for their crustal magnetic decomposition.
Finally, we reran the decomposition of the flow models using the ocean-continent regions for the crustal magnetic field taken from Beggan et al. (2013) and found the same aliasing along the boundaries. This suggests the cause of the leakage is a combination of the energy distribution of the flow model (predominantly red spectra for realistic models) and the use of scalar Slepian decomposition on a vector field given by spatial derivatives of potential functions expanded in spherical harmonics.
Our method was also applied to the regions inside and outside the tangent cylinder (shown in Additional file 1), represented by two caps of \(21^\circ\) half-angle (Fig. 2). As with the tangentially geostrophic ambiguous patches decomposition, large aliasing can be observed along the boundaries. As the tangent cylinder region is much smaller than the tangentially geostrophic ambiguous patches, the leakage obscures most of the flow structure within the region, making investigation of these separated flows difficult.
The results in "Decomposition of flow models" section allow some conclusions to be drawn about the influence of the tangentially geostrophic unambiguous patch on global flow models. Our experiments suggest that the spatial leakage seen in the flow maps is exaggerated by taking the spatial derivative \(\frac{{\text {d}}P^m_l(\cos \theta )}{{\text {d}}\theta }\) of the associated Legendre polynomials when considering the west–east component of the toroidal flow and the north–south component of the poloidal flow (see Eq. (2)). When plotted as vector flow maps, the leakage can be seen but is less obvious, as shown in Fig. 11.
The energy contribution of the tangentially geostrophic unambiguous patch in the satellite flow model is 41% of the toroidal component and 67% of the poloidal component input energy, which is different from the percentage surface area of the patch on the whole CMB (60%). However, the leakage at the boundaries is likely to be affecting the true distribution of kinetic energy. Hence, without further work to minimize the leakage, there remains a large uncertainty. We note that the decomposition tends to be better behaved in the Pacific Ocean, due to the lower flow complexity and magnitude in this region.
The aliasing strongly affects the analysis of the tangent cylinder flow, which Livermore et al. (2016) indicated was of particular interest due to significantly enhanced flow velocities. Spectra indicate that the tangent cylinder contains \(\sim 16\%\) of the energy of the summed decomposition flow, which is greater than the \(\sim 6\%\) of the energy in the original flow. We also note that without spherical harmonic models to higher degree it is not easy to isolate the detailed core surface flow inside or along the boundary of the tangent cylinder.
This study has highlighted the nature of the aliasing we have found when using Slepian decomposition with vectors represented by scalar potentials. These factors will likely affect direct inversions for flow from SV data into a select region, as undertaken for the crustal magnetic field by Plattner and Simons (2015). This will inform investigations of the different regional dynamics at the CMB, such as in the tangent cylinder or the large low-shear-velocity provinces (LLSVP). LLSVPs are features of low seismic velocity at the base of the mantle with distinct edges (Garnero et al. 2016). Possibilities for this seismic signal include a thermochemical pile or super-plume feature, which could be anomalously hot and/or dense. Should these structures exist, the regional differences on the top of the outer core are likely to influence the flow structures within it, making it an interesting region to study.
Using the tangentially geostrophic ambiguous patches and the tangent cylinder as example regions, we have decomposed global flow models into these spatial patches and their complements using scalar Slepian functions. We analyzed three different types of flow models including a model inverted from satellite magnetic data. However, although the flows in the region of interest and its complement successfully sum to the input, the method produces strong aliasing along the region boundaries, to the extent that flows are partially obscured.
We examined the reasons for this and conclude that further work is required to reduce spectral leakage between the regions. We intend to apply spherical Slepian decomposition to the flow treated as a vector quantity in areas of interest, rather than decomposing the globally representative scalar potential, and to undertake direct inversion of SV data for localized flow.
HR adapted and wrote new code for the experiments. CDB and KAW helped with the analysis and discussion of the results. All authors contributed to the writing of the manuscript. All authors read and approved the final manuscript.
We thank Julien Aubert for providing the Coupled Earth numerical geodynamo model snapshot, Frederik Simons for making his and co-authors' Slepian code available (doi: 10.1137/S0036144504445765, http://geoweb.princeton.edu/people/simons/software.html), and Jarno Saarimäki for his work on previous scripts focusing on the crustal field. We also wish to thank Lara Kalnins and Ashley Smith for their assistance with the project. We thank Hagay Amit and two anonymous reviewers for their feedback and suggestions for improvement on earlier versions of the paper.
The code and data used to generate these findings are available on request.
HFR is funded through the E3 DTP at the University of Edinburgh by NERC Grant NE/L002558/1, and a NERC CASE studentship with a British Geological Survey BUFI Grant (BGS Contract GA/17S/009).
Open AccessThis article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Additional file 1. Additional images to demonstrate the absolute difference between the input and the summed flow maps, and the application of this technique to another region of interest, the tangent cylinder.
School of GeoSciences, University of Edinburgh, Grant Institute, James Hutton Road, Edinburgh, EH9 3FE, UK
British Geological Survey, Research Avenue South, Riccarton, Edinburgh, EH14 4AP, UK
Amit H, Pais MA (2013) Differences between tangential geostrophy and columnar flow. Geophys J Int 194(1):145–157Google Scholar
Aubert J, Finlay CC, Fournier A (2013) Bottom-up control of geomagnetic secular variation by the Earth's inner core. Nature 502(7470):219–223Google Scholar
Aurnou J, Calkins M, Cheng J, Julien K, King E, Nieves D, Soderlund K, Stellmach S (2015) Rotating convective turbulence in earth and planetary cores. Phys Earth Planet In 246:52–71Google Scholar
Backus G (1968) Kinematics of geomagnetic secular variation in a perfectly conducting core. Philos Trans R Soc Lond A Math Phys Eng Sci 263(1141):239–266Google Scholar
Determination of N* amplitudes from associated strangeness production in p+p collisions (1703.01978)
R. Münzer, L. Fabbietti, E. Epple, P. Klose, F. Hauenstein, N. Herrmann, D. Grzonka, Y. Leifels, M. Maggiora, D. Pleiner, B. Ramstein, J. Ritman, E. Roderburg, P. Salabura, A. Sarantsev, Z. Basrak, P. Buehler, M. Cargnelli, R. Caplar, H. Clement, O. Czerwiakowa, I. Deppner, M. Dzelalija, W. Eyrich, Z. Fodor, P. Gasik, I. Gasparic, A. Gillitzer, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kis, P. Koczon, R. Kotte, A. Lebedev, A. Le Fevre, J.L. Liu, V. Manko, J. Marton, T. Matulewicz, K. Piasecki, F. Rami, A. Reischl, M.S. Ryu, P. Schmidt, Z. Seres, B. Sikora, K.S. Sim, K. Siwek-Wilczynska, V. Smolyankin, K. Suzuki, Z. Tyminski, P. Wagner, I. Weber, E. Widmann, K. Wisniewski, Z.G. Xiao, T. Yamasaki, I. Yushmanov, P. Wintz, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
Sept. 6, 2018 nucl-ex
We present the first determination of the energy-dependent production amplitudes of N$^{*}$ resonances with masses between 1650 MeV/c$^{2}$ and 1900 MeV/c$^{2}$ for an excess energy between $0$ and $600$ MeV. A combined Partial Wave Analysis of seven exclusively reconstructed data samples for the reaction p+p $\rightarrow pK\Lambda$ measured by the COSY-TOF, DISTO, FOPI and HADES collaborations in fixed target experiments at kinetic energies between 2.14 and 3.5 GeV is used to determine the amplitude of the resonant and non-resonant contributions.
Challenges in QCD matter physics - The Compressed Baryonic Matter experiment at FAIR (1607.01487)
CBM Collaboration: T. Ablyazimov, A. Abuhoza, R.P. Adak, M. Adamczyk, K. Agarwal, M.M. Aggarwal, Z. Ahammed, F. Ahmad, N. Ahmad, S. Ahmad, A. Akindinov, P. Akishin, E. Akishina, T. Akishina, V. Akishina, A. Akram, M. Al-Turany, I. Alekseev, E. Alexandrov, I. Alexandrov, S. Amar-Youcef, M. Anelić, O. Andreeva, C. Andrei, A. Andronic, Yu. Anisimov, H. Appelshäuser, D. Argintaru, E. Atkin, S. Avdeev, R. Averbeck, M.D. Azmi, V. Baban, M. Bach, E. Badura, S. Bähr, T. Balog, M. Balzer, E. Bao, N. Baranova, T. Barczyk, D. Bartoş, S. Bashir, M. Baszczyk, O. Batenkov, V. Baublis, M. Baznat, J. Becker, K.-H. Becker, S. Belogurov, D. Belyakov, J. Bendarouach, I. Berceanu, A. Bercuci, A. Berdnikov, Y. Berdnikov, R. Berendes, G. Berezin, C. Bergmann, D. Bertini, O. Bertini, C. Beşliu, O. Bezshyyko, P.P. Bhaduri, A. Bhasin, A.K. Bhati, B. Bhattacharjee, A. Bhattacharyya, T.K. Bhattacharyya, S. Biswas, T. Blank, D. Blau, V. Blinov, C. Blume, Yu. Bocharov, J. Book, T. Breitner, U. Brüning, J. Brzychczyk, A. Bubak, H. Büsching, T. Bus, V. Butuzov, A. Bychkov, A. Byszuk, Xu Cai, M. Cálin, Ping Cao, G. Caragheorgheopol, I. Carević, V. Cătănescu, A. Chakrabarti, S. Chattopadhyay, A. Chaus, Hongfang Chen, LuYao Chen, Jianping Cheng, V. Chepurnov, H. Cherif, A. Chernogorov, M.I. Ciobanu, G. Claus, F. Constantin, M. Csanád, N. D'Ascenzo, Supriya Das, Susovan Das, J. de Cuveland, B. Debnath, D. Dementiev, Wendi Deng, Zhi Deng, H. Deppe, I. Deppner, O. Derenovskaya, C.A. Deveaux, M. Deveaux, K. Dey, M. Dey, P. Dillenseger, V. Dobyrn, D. Doering, Sheng Dong, A. Dorokhov, M. Dreschmann, A. Drozd, A.K. Dubey, S. Dubnichka, Z. Dubnichkova, M. Dürr, L. Dutka, M. Dželalija, V.V. Elsha, D. Emschermann, H. Engel, V. Eremin, T. Eşanu, J. Eschke, D. Eschweiler, Huanhuan Fan, Xingming Fan, M. Farooq, O. Fateev, Shengqin Feng, S.P.D. Figuli, I. Filozova, D. Finogeev, P. Fischer, H. Flemming, J. Förtsch, U. Frankenfeld, V. Friese, E. Friske, I. Fröhlich, J. Frühauf, J. Gajda, T. Galatyuk, G. Gangopadhyay, C. García Chávez, J. Gebelein, P. Ghosh, S.K. Ghosh, S. Gläßel, M. Goffe, L. Golinka-Bezshyyko, V. Golovatyuk, S. Golovnya, V. Golovtsov, M. Golubeva, D. Golubkov, A. Gómez Ramírez, S. Gorbunov, S. Gorokhov, D. Gottschalk, P. Gryboś, A. Grzeszczuk, F. Guber, K. Gudima, M. Gumiński, A. Gupta, Yu. Gusakov, Dong Han, H. Hartmann, Shue He, J. Hehner, N. Heine, A. Herghelegiu, N. Herrmann, B. Heß, J.M. Heuser, A. Himmi, C. Höhne, R. Holzmann, Dongdong Hu, Guangming Huang, Xinjie Huang, D. Hutter, A. Ierusalimov, E.-M. Ilgenfritz, M. Irfan, D. Ivanischev, M. Ivanov, P. Ivanov, Valery Ivanov, Victor Ivanov, Vladimir Ivanov, A. Ivashkin, K. Jaaskelainen, H. Jahan, V. Jain, V. Jakovlev, T. Janson, Di Jiang, A. Jipa, I. Kadenko, P. Kähler, B. Kämpfer, V. Kalinin, J. Kallunkathariyil, K.-H. Kampert, E. Kaptur, R. Karabowicz, O. Karavichev, T. Karavicheva, D. Karmanov, V. Karnaukhov, E. Karpechev, K. Kasiński, G. Kasprowicz, M. Kaur, A. Kazantsev, U. Kebschull, G. Kekelidze, M.M. Khan, S.A. Khan, A. Khanzadeev, F. Khasanov, A. Khvorostukhin, V. Kirakosyan, M. Kirejczyk, A. Kiryakov, M. Kiš, I. Kisel, P. Kisel, S. Kiselev, T. Kiss, P. Klaus, R. Kłeczek, Ch. Klein-Bösing, V. Kleipa, V. Klochkov, P. Kmon, K. Koch, L. Kochenda, P. Koczoń, W. Koenig, M. Kohn, B.W. Kolb, A. Kolosova, B. Komkov, M. Korolev, I. Korolko, R. Kotte, A. Kovalchuk, S. Kowalski, M. Koziel, G. Kozlov, V. Kozlov, V. Kramarenko, P. Kravtsov, E. Krebs, C. Kreidl, I. Kres, D. Kresan, G. Kretschmar, M. Krieger, A.V. Kryanev, E. Kryshen, M. Kuc, W. Kucewicz, V. Kucher, L. 
Kudin, A. Kugler, Ajit Kumar, Ashwini Kumar, L. Kumar, J. Kunkel, A. Kurepin, N. Kurepin, A. Kurilkin, P. Kurilkin, V. Kushpil, S. Kuznetsov, V. Kyva, V. Ladygin, C. Lara, P. Larionov, A. Laso García, E. Lavrik, I. Lazanu, A. Lebedev, S. Lebedev, E. Lebedeva, J. Lehnert, J. Lehrbach, Y. Leifels, F. Lemke, Cheng Li, Qiyan Li, Xin Li, Yuanjing Li, V. Lindenstruth, B. Linnik, Feng Liu, I. Lobanov, E. Lobanova, S. Löchner, P.-A. Loizeau, S.A. Lone, J.A. Lucio Martínez, Xiaofeng Luo, A. Lymanets, Pengfei Lyu, A. Maevskaya, S. Mahajan, D.P. Mahapatra, T. Mahmoud, P. Maj, Z. Majka, A. Malakhov, E. Malankin, D. Malkevich, O. Malyatina, H. Malygina, M.M. Mandal, S. Mandal, V. Manko, S. Manz, A.M. Marin Garcia, J. Markert, S. Masciocchi, T. Matulewicz, L. Meder, M. Merkin, V. Mialkovski, J. Michel, N. Miftakhov, L. Mik, K. Mikhailov, V. Mikhaylov, B. Milanović, V. Militsija, D. Miskowiec, I. Momot, T. Morhardt, S. Morozov, W.F.J. Müller, C. Müntz, S. Mukherjee, C.E. Muńoz Castillo, Yu. Murin, R. Najman, C. Nandi, E. Nandy, L. Naumann, T. Nayak, A. Nedosekin, V.S. Negi, W. Niebur, V. Nikulin, D. Normanov, A. Oancea, Kunsu Oh, Yu. Onishchuk, G. Ososkov, P. Otfinowski, E. Ovcharenko, S. Pal, I. Panasenko, N.R. Panda, S. Parzhitskiy, V. Patel, C. Pauly, M. Penschuck, D. Peshekhonov, V. Peshekhonov, V. Petráček, M. Petri, M. Petriş, A. Petrovici, M. Petrovici, A. Petrovskiy, O. Petukhov, D. Pfeifer, K. Piasecki, J. Pieper, J. Pietraszko, R. Płaneta, V. Plotnikov, V. Plujko, J. Pluta, A. Pop, V. Pospisil, K. Poźniak, A. Prakash, S.K. Prasad, M. Prokudin, I. Pshenichnov, M. Pugach, V. Pugatch, S. Querchfeld, S. Rabtsun, L. Radulescu, S. Raha, F. Rami, R. Raniwala, S. Raniwala, A. Raportirenko, J. Rautenberg, J. Rauza, R. Ray, S. Razin, P. Reichelt, S. Reinecke, A. Reinefeld, A. Reshetin, C. Ristea, O. Ristea, A. Rodriguez Rodriguez, F. Roether, R. Romaniuk, A. Rost, E. Rostchin, I. Rostovtseva, Amitava Roy, Ankhi Roy, J. Rożynek, Yu. Ryabov, A. Sadovsky, R. Sahoo, P.K. Sahu, S.K. Sahu, J. Saini, S. Samanta, S.S. Sambyal, V. Samsonov, J. Sánchez Rosado, O. Sander, S. Sarangi, T. Satława, S. Sau, V. Saveliev, S. Schatral, C. Schiaua, F. Schintke, C.J. Schmidt, H.R. Schmidt, K. Schmidt, J. Scholten, K. Schweda, F. Seck, S. Seddiki, I. Selyuzhenkov, A. Semennikov, A. Senger, P. Senger, A. Shabanov, A. Shabunov, Ming Shao, A.D. Sheremetiev, Shusu Shi, N. Shumeiko, V. Shumikhin, I. Sibiryak, B. Sikora, A. Simakov, C. Simon, C. Simons, R.N. Singaraju, A.K. Singh, B.K. Singh, C.P. Singh, V. Singhal, M. Singla, P. Sitzmann, K. Siwek-Wilczyńska, L.Škoda, I. Skwira-Chalot, I. Som, Guofeng Song, Jihye Song, Z. Sosin, D. Soyk, P. Staszel, M. Strikhanov, S. Strohauer, J. Stroth, C. Sturm, R. Sultanov, Yongjie Sun, D. Svirida, O. Svoboda, A. Szabó, R. Szczygieł, R. Talukdar, Zebo Tang, M. Tanha, J. Tarasiuk, O. Tarassenkova, M.-G. Târzilă, M. Teklishyn, T. Tischler, P. Tlustý, T. Tölyhi, A. Toia, N. Topil'skaya, M. Träger, S. Tripathy, I. Tsakov, Yu. Tsyupa, A. Turowiecki, N.G. Tuturas, F. Uhlig, E. Usenko, I. Valin, D. Varga, I. Vassiliev, O. Vasylyev, E. Verbitskaya, W. Verhoeven, A. Veshikov, R. Visinka, Y.P. Viyogi, S. Volkov, A. Volochniuk, A. Vorobiev, Aleksey Voronin, Alexander Voronin, V. Vovchenko, M. Vznuzdaev, Dong Wang, Xi-Wei Wang, Yaping Wang, Yi Wang, M. Weber, C. Wendisch, J.P. Wessels, M. Wiebusch, J. Wiechula, D. Wielanek, A. Wieloch, A. Wilms, N. Winckler, M. Winter, K. Wiśniewski, Gy. Wolf, Sanguk Won, Ke-Jun Wu, J. 
Wüstenfeld, Changzhou Xiang, Nu Xu, Junfeng Yang, Rongxing Yang, Zhongbao Yin, In-Kwon Yoo, B. Yuldashev, I. Yushmanov, W. Zabołotny, Yu. Zaitsev, N.I. Zamiatin, Yu. Zanevsky, M. Zhalov, Yifei Zhang, Yu Zhang, Lei Zhao, Jiajun Zheng, Sheng Zheng, Daicui Zhou, Jing Zhou, Xianglei Zhu, A. Zinchenko, W. Zipper, M.Żoładź, P. Zrelov, V. Zryuev, P. Zumbruch, M. Zyzak
March 29, 2017 nucl-ex
Substantial experimental and theoretical efforts worldwide are devoted to exploring the phase diagram of strongly interacting matter. At LHC and top RHIC energies, QCD matter is studied at very high temperatures and nearly vanishing net-baryon densities. There is evidence that a Quark-Gluon Plasma (QGP) was created in experiments at RHIC and LHC. The transition from the QGP back to the hadron gas is found to be a smooth crossover. For larger net-baryon densities and lower temperatures, it is expected that the QCD phase diagram exhibits a rich structure, such as a first-order phase transition between hadronic and partonic matter which terminates in a critical point, or exotic phases like quarkyonic matter. The discovery of these landmarks would be a breakthrough in our understanding of the strong interaction and is therefore the focus of various high-energy heavy-ion research programs. The Compressed Baryonic Matter (CBM) experiment at FAIR will play a unique role in the exploration of the QCD phase diagram in the region of high net-baryon densities, because it is designed to run at unprecedented interaction rates. High-rate operation is the key prerequisite for high-precision measurements of multi-differential observables and of rare diagnostic probes which are sensitive to the dense phase of the nuclear fireball. The goal of the CBM experiment at SIS100 ($\sqrt{s_{NN}}$ = 2.7–4.9 GeV) is to discover fundamental properties of QCD matter: the phase structure at large baryon-chemical potentials ($\mu_B$ > 500 MeV), effects of chiral symmetry, and the equation of state at the high densities expected to occur in the cores of neutron stars. In this article, we review the motivation for and the physics programme of CBM, including activities before the start of data taking in 2022, in the context of the worldwide efforts to explore high-density QCD matter.
Twin GEM-TPC Prototype (HGB4) Beam Test at GSI - a Development for the Super-FRS at FAIR (1612.05488)
F. Garcia, R. Turpeinen, R. Lauhakangas, E. Tuominen, J. Heino, J. Äystö, T. Grahn, S. Rinta-Antilla, A. Jokinen, R. Janik, P. Strmen, M. Pikna, B. Sitar, B. Voss, J. Kunkel, V. Kleipa, A. Gromliuk, H. Risch, I. Kaufeld, C. Caesar, C. Simon, M. kìs, A. Prochazka, C. Nociforo, S. Pietri, H. Simon, C. J. Schmidt, J. Hoffmann, I. Rusanov, N. Kurz, P. Skott, S. Minami, M. Winkler
Dec. 16, 2016 physics.ins-det
The GEM-TPC detector will be part of the standard Super-FRS detection system, serving as a tracking detector at several focal stations along the separator and its three branches.
Results of the ASY-EOS experiment at GSI: The symmetry energy at suprasaturation density (1608.04332)
P. Russotto, S. Gannon, S. Kupny, P. Lasko, L. Acosta, M. Adamczyk, A. Al-Ajlan, M. Al-Garawi, S. Al-Homaidhi, F. Amorini, L. Auditore, T. Aumann, Y. Ayyad, Z. Basrak, J. Benlliure, M. Boisjoli, K. Boretzky, J. Brzychczyk, A. Budzanowski, C. Caesar, G. Cardella, P. Cammarata, Z. Chajecki, M. Chartier, A. Chbihi, M. Colonna, M. D. Cozma, B. Czech, E. De Filippo, M. Di Toro, M. Famiano, I. Gašparić, L. Grassi, C. Guazzoni, P. Guazzoni, M. Heil, L. Heilborn, R. Introzzi, T. Isobe, K. Kezzar, M. Kiš, A. Krasznahorkay, N. Kurz, E. La Guidara, G. Lanzalone, A. Le Fèvre, Y. Leifels, R. C. Lemmon, Q. F. Li, I. Lombardo, J. Lukasik, W. G. Lynch, P. Marini, Z. Matthews, L. May, T. Minniti, M. Mostazo, A. Pagano, E. V. Pagano, M. Papa, P. Pawlowski, S. Pirrone, G. Politi, F. Porto, W. Reviol, F. Riccio, F. Rizzo, E. Rosato, D. Rossi, S. Santoro, D. G. Sarantites, H. Simon, I. Skwirczynska, Z. Sosin, L. Stuhl, W. Trautmann, A. Trifirò, M. Trimarchi, M. B. Tsang, G. Verde, M. Veselsky, M. Vigilante, Yongjia Wang, A. Wieloch, P. Wigg, J. Winkelbauer, H. H. Wolter, P. Wu, S. Yennello, P. Zambon, L. Zetta, M. Zoric
Sept. 27, 2016 nucl-ex
Directed and elliptic flows of neutrons and light charged particles were measured for the reaction 197Au+197Au at 400 MeV/nucleon incident energy within the ASY-EOS experimental campaign at the GSI laboratory. The detection system consisted of the Large Area Neutron Detector LAND, combined with parts of the CHIMERA multidetector, of the ALADIN Time-of-flight Wall, and of the Washington-University Microball detector. The latter three arrays were used for the event characterization and reaction-plane reconstruction. In addition, an array of triple telescopes, KRATTA, was used for complementary measurements of the isotopic composition and flows of light charged particles. From the comparison of the elliptic flow ratio of neutrons with respect to charged particles with UrQMD predictions, a value $\gamma = 0.72 \pm 0.19$ is obtained for the power-law coefficient describing the density dependence of the potential part in the parametrization of the symmetry energy. It represents a new and more stringent constraint for the regime of supra-saturation density and confirms, with a considerably smaller uncertainty, the moderately soft to linear density dependence deduced from the earlier FOPI-LAND data. The densities probed are shown to reach beyond twice the saturation density.
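For orientation, the coefficient $\gamma$ quoted above enters a parametrization of the symmetry energy of the schematic form commonly used in such flow analyses; the explicit split into kinetic and potential parts below is supplied here for illustration and is not quoted from the abstract:
$$E_{\mathrm{sym}}(\rho) \;=\; E_{\mathrm{sym}}^{\mathrm{kin}}(\rho) \;+\; E_{\mathrm{sym}}^{\mathrm{pot}}(\rho_0)\left(\frac{\rho}{\rho_0}\right)^{\gamma},$$
so that $\gamma = 0.72 \pm 0.19$ corresponds to a moderately soft to linear density dependence of the potential term above the saturation density $\rho_0$.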
Performance studies of MRPC prototypes for CBM (1606.04917)
I. Deppner, N. Herrmann, J. Frühauf, M. Kiš, P. Lyu, P.-A. Loizeau, L. Shi, C. Simon, Y. Wang, B. Xie
June 13, 2016 nucl-ex, physics.ins-det
Multi-gap Resistive Plate Chambers (MRPCs) with multi-strip readout are considered to be the optimal detector candidate for the Time-of-Flight (ToF) wall in the Compressed Baryonic Matter (CBM) experiment. In the R&D phase, MRPCs with different granularities, low-resistive materials and high-voltage stack configurations were developed and tested. Here, we focus on two prototypes called HD-P2 and THU-strip, both with strips of 27 cm length and low-resistive glass electrodes. The HD-P2 prototype has a single-stack configuration with 8 gaps while the THU-strip prototype is constructed in a double-stack configuration with 2 $\times$ 4 gaps. The performance of these counters in terms of efficiency and time resolution, obtained in a test beam time with a heavy-ion beam at GSI in 2014, is presented in this proceeding.
Centrality dependence of subthreshold $\phi$ meson production in Ni+Ni collisions at 1.9A GeV (1602.04378)
K. Piasecki, Z. Tymiński, N. Herrmann, R. Averbeck, A. Andronic, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, E. Cordier, P. Crochet, O. Czerwiakowa, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, P. Gasik, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, M. Korolija, R. Kotte, A. Lebedev, Y. Leifels, A. Le Fèvre, J.L. Liu, X. Lopez, A. Mangiarotti, V. Manko, J. Marton, T. Matulewicz, M. Merschmeyer, R. Münzer, D. Pelte, M. Petrovici, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, G. Stoicea, K. Suzuki, P. Wagner, I. Weber, E. Widmann, K. Wiśniewski, Z.G. Xiao, H.S. Xu, I. Yushmanov, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
June 9, 2016 nucl-ex
We analysed the $\phi$ meson production in central Ni+Ni collisions at the beam kinetic energy of 1.93A GeV with the FOPI spectrometer and found a production probability per event of $[8.6 ~\pm~ 1.6 ~(\text{stat}) \pm 1.5 ~(\text{syst})] \times 10^{-4}$. This new data point allows, for the first time, an inspection of the centrality dependence of subthreshold $\phi$ meson production in heavy-ion collisions. The rise of the $\phi$ meson multiplicity per event with the mean number of participants can be parameterized by a power function with exponent $\alpha = 1.8 \pm 0.6$. The ratio of $\phi$ to $\text{K}^-$ production yields seems, within the experimental uncertainties, not to depend on the collision centrality, and the average of the measured values was found to be $0.36 \pm 0.05$.
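Written out, the centrality scaling quoted above is a single power law in the mean number of participants; the symbol $P_\phi$ for the per-event production probability is introduced here only for compactness:
$$P_\phi \;\propto\; \langle A_{\mathrm{part}}\rangle^{\alpha}, \qquad \alpha = 1.8 \pm 0.6,$$
i.e. the $\phi$ yield per event grows faster than linearly with the mean number of participants.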
Strange meson production in Al+Al collisions at 1.9A GeV (1512.06988)
P. Gasik, K. Piasecki, N. Herrmann, Y. Leifels, T. Matulewicz, A. Andronic, R. Averbeck, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, P. Crochet, O. Czerwiakowa, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, R. Kotte, A. Lebedev, A. Le Fèvre, J.L. Liu, X. Lopez, V. Manko, J. Marton, R. Münzer, M. Petrovici, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, K. Suzuki, Z. Tymiński, P. Wagner, I. Weber, E. Widmann, K. Wiśniewski, Z.G. Xiao, I. Yushmanov, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
May 17, 2016 nucl-ex
The production of K$^+$, K$^-$ and $\phi$(1020) mesons is studied in Al+Al collisions at a beam energy of 1.9A GeV, which is close to or below the production threshold in NN reactions. Inverse slopes, anisotropy parameters, and total emission yields of K$^{\pm}$ mesons are obtained. A comparison of the ratio of the kinetic energy distributions of K$^-$ and K$^+$ mesons to HSD transport model calculations suggests that the inclusion of in-medium modifications of kaon properties is necessary to reproduce the ratio. The inverse slope and total yield of $\phi$ mesons are deduced. The contribution to K$^-$ production from $\phi$ meson decays is found to be [17 $\pm$ 3 (stat) $^{+2}_{-7}$ (syst)] %. The results are in line with previous K$^{\pm}$ and $\phi$ data obtained for different colliding systems at similar incident beam energies.
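The inverse slopes and anisotropy parameters mentioned above are typically extracted by fitting the kaon spectra with a Boltzmann-like form; the expression below is a standard choice in such analyses and is given only as an assumed illustration, not as the fit function quoted by the authors:
$$\frac{d^2N}{dE_{\mathrm{kin}}\,d\cos\theta_{\mathrm{cm}}} \;\propto\; p\,E\,e^{-E/T_B}\left(1 + a_2\cos^2\theta_{\mathrm{cm}}\right),$$
where $T_B$ is the inverse-slope parameter and $a_2$ quantifies the polar anisotropy of the emission.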
Time and position resolution of high granularity, high counting rate MRPC for the inner zone of the CBM-TOF wall (1605.02558)
M. Petriş, D. Bartoş, G. Caragheorgheopol, I. Deppner, J. Frühauf, N. Herrmann, M. Kiš, P-A. Loizeau, M. Petrovici, L. Rǎdulescu, V. Simion, C. Simon
May 9, 2016 hep-ex, physics.ins-det
Multi-gap RPC prototypes with readout on a multi-strip electrode were developed for the small polar angle region of the CBM-TOF subdetector, the most demanding zone in terms of granularity and counting rate. The prototypes are based on low-resistivity ($\sim$10$^{10}$ $\Omega$cm) glass electrodes for operation in a high counting-rate environment. The strip width/pitch size was chosen to fulfill the impedance matching with the front-end electronics and the granularity requirements of the innermost zone of the CBM-TOF wall. The in-beam tests, using secondary particles produced in heavy-ion collisions on a Pb target at SIS18 (GSI Darmstadt) and SPS (CERN), were focused on the performance of the prototype in conditions similar to the ones expected at SIS100/FAIR. An efficiency larger than 98% and a system time resolution of the order of 70–80 ps were obtained in a high counting-rate and high-multiplicity environment.
Influence of $\phi$ mesons on negative kaons in Ni+Ni collisions at 1.91A GeV beam energy (1412.4493)
K. Piasecki, N. Herrmann, R. Averbeck, A. Andronic, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, P. Crochet, O. Czerwiakowa, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, P. Gasik, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, B. Hong, T.I. Kang, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, R. Kotte, A. Lebedev, Y. Leifels, A. Le Fèvre, J.L. Liu, X. Lopez, V. Manko, J. Marton, T. Matulewicz, R. Münzer, M. Petrovici, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, K. Suzuki, Z. Tymiński, P. Wagner, I. Weber, E. Widmann, K. Wiśniewski, Z.G. Xiao, I. Yushmanov, Y. Zhang, A. Zhilin, V. Zinyuk, J. Zmeskal
Dec. 15, 2014 nucl-ex
$\phi$ and K$^-$ mesons from Ni+Ni collisions at the beam energy of 1.91A GeV have been measured by the FOPI spectrometer, with a trigger selecting central and semi-central events amounting to 51% of the total cross section. The phase-space distributions and the total yield of K$^-$, as well as the kinetic energy distribution and the total yield of $\phi$ mesons, are presented. The $\phi$/K$^-$ ratio is found to be $0.44 \pm 0.07(\text{stat}) ^{+0.18}_{-0.12} (\text{syst})$, meaning that about 22% of K$^-$ mesons originate from the decays of $\phi$ mesons, occurring mostly in vacuum. The inverse slopes of direct kaons are up to about 15 MeV larger than the ones extracted within the one-source model, signalling that a considerable share of the gap between the slopes of K$^+$ and K$^-$ could be explained by the contribution of $\phi$ mesons to negative kaons.
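The step from the measured ratio to the quoted 22% is a one-line estimate once the $\phi \to K^+K^-$ branching ratio is supplied; the branching-ratio value used below is the PDG one and is not stated in the abstract itself:
$$\frac{N_{K^-}^{\phi}}{N_{K^-}} \;\approx\; \frac{N_\phi}{N_{K^-}}\,\mathrm{BR}(\phi\to K^+K^-) \;\approx\; 0.44 \times 0.49 \;\approx\; 0.22.$$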
Azimuthal Emission Patterns of $K^{+}$ and of $ K^{-} $ Mesons in Ni + Ni Collisions near the Strangeness Production Threshold (1403.1504)
V. Zinyuk, T.I. Kang, Y. Leifels, N. Herrmann, B. Hong, R. Averbeck, A. Andronic, V. Barret, Z. Basrak, N. Bastid, M.L. Benabderrahmane, M. Berger, P. Buehler, M. Cargnelli, R. Čaplar, I. Carevic, P. Crochet, I. Deppner, P. Dupieux, M. Dželalija, L. Fabbietti, Z. Fodor, P. Gasik, I. Gašparić, Y. Grishkin, O.N. Hartmann, K.D. Hildenbrand, J. Kecskemeti, Y.J. Kim, M. Kirejczyk, M. Kiš, P. Koczon, R. Kotte, A. Lebedev, A. Le Fèvre, J.L. Liu, X. Lopez, V. Manko, J. Marton, T. Matulewicz, R. Münzer, M. Petrovici, K. Piasecki, F. Rami, A. Reischl, W. Reisdorf, M.S. Ryu, P. Schmidt, A. Schüttauf, Z. Seres, B. Sikora, K.S. Sim, V. Simion, K. Siwek-Wilczyńska, V. Smolyankin, K. Suzuki, Z. Tyminski, P. Wagner, E. Widmann, K. Wiśniewski, Z.G. Xiao, I. Yushmanov, Y. Zhang, A. Zhilin, J. Zmeskal (FOPI Collaboration)
July 4, 2014 nucl-ex
Azimuthal emission patterns of $K^\pm$ mesons have been measured in Ni + Ni collisions with the FOPI spectrometer at a beam kinetic energy of 1.91 A GeV. The transverse-momentum ($p_{T}$) integrated directed and elliptic flow of $K^{+}$ and $K^{-}$ mesons, as well as the centrality dependence of the $p_{T}$-differential directed flow of $K^{+}$ mesons, are compared to the predictions of the HSD and IQMD transport models. The data exhibit different propagation patterns of $K^{+}$ and $K^{-}$ mesons in the compressed and heated nuclear medium and favor the existence of a kaon-nucleon in-medium potential, repulsive for $K^{+}$ mesons and attractive for $K^{-}$ mesons.
KRATTA, a versatile triple telescope array for charged reaction products (1301.2127)
J. Łukasik, P. Pawłowski, A. Budzanowski, B. Czech, I. Skwirczyńska, J. Brzychczyk, M. Adamczyk, S. Kupny, P. Lasko, Z. Sosin, A. Wieloch, M. Kiš, Y. Leifels, W. Trautmann
Jan. 10, 2013 nucl-ex, physics.ins-det
A new detection system, KRATTA, the Kraków Triple Telescope Array, is presented. This versatile, low-threshold, broad-energy-range system has been built to measure the energy, emission angle, and isotopic composition of light charged reaction products. It consists of 38 independent modules which can be arranged in an arbitrary configuration. A single module, actively covering about 4.5 msr of the solid angle at the optimal distance of 40 cm from the target, consists of three identical, 0.500 mm thick, large-area photodiodes, used also for direct detection, and of two CsI(1500 ppm Tl) crystals of 2.5 and 12.5 cm length, respectively. All the signals are digitally processed. The lower identification threshold, due to the thickness of the first photodiode, has been reduced to about 2.5 MeV for protons (~0.065 mm of Si equivalent) by applying a pulse shape analysis. The pulse shape analysis also made it possible to decompose the complex signals from the middle photodiode into their ionization and scintillation components and to obtain a satisfactory isotopic resolution with a single readout channel. The upper energy limit for protons is about 260 MeV. The whole setup is easily portable. It performed very well during the ASY-EOS experiment, conducted in May 2011 at GSI. The structure and performance of the array are described using the results of Au+Au collisions at 400 MeV/nucleon obtained in this experiment.
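To make the two-component decomposition mentioned above concrete, the short sketch below fits a digitized pulse as a linear combination of a fast ("ionization-like") and a slow ("scintillation-like") template by linear least squares. It is a schematic illustration only: the template shapes, time constants, amplitudes and noise level are invented for the example and do not describe the actual KRATTA signal-processing chain.

import numpy as np

t = np.arange(0.0, 10.0, 0.01)                # time axis, arbitrary units (hypothetical)
fast = np.exp(-t / 0.2) - np.exp(-t / 0.05)   # fast "ionization-like" template (hypothetical)
slow = np.exp(-t / 3.0) - np.exp(-t / 0.5)    # slow "scintillation-like" template (hypothetical)

rng = np.random.default_rng(1)
true_fast, true_slow = 5.0, 2.0
pulse = true_fast * fast + true_slow * slow + rng.normal(0.0, 0.02, t.size)  # simulated waveform

# Solve pulse ~ a*fast + b*slow in the least-squares sense.
A = np.column_stack([fast, slow])
(a, b), *_ = np.linalg.lstsq(A, pulse, rcond=None)
print(f"fitted amplitudes: fast {a:.2f}, slow {b:.2f}")  # should recover roughly 5.0 and 2.0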
The ASY-EOS experiment at GSI: investigating the symmetry energy at supra-saturation densities (1209.5961)
P. Russotto, M. Chartier, E. De Filippo, A. Le Févre, S. Gannon, I. Gašparić, M. Kiš, S. Kupny, Y. Leifels, R.C. Lemmon, J. Łukasik, P. Marini, A. Pagano, P. Pawłowski, S. Santoro, W. Trautmann, M. Veselsky, L. Acosta, M. Adamczyk, A. Al-Ajlan, M. Al-Garawi, S. Al-Homaidhi, F. Amorini, L. Auditore, T. Aumann, Y. Ayyad, V. Baran, Z. Basrak, J. Benlliure, C. Boiano, M. Boisjoli, K. Boretzky, J. Brzychczyk, A. Budzanowski, G. Cardella, P. Cammarata, Z. Chajecki, A. Chbihi, M. Colonna, D. Cozma, B. Czech, M. Di Toro, M. Famiano, E. Geraci, V. Greco, L. Grassi, C. Guazzoni, P. Guazzoni, M. Heil, L. Heilborn, R. Introzzi, T. Isobe, K. Kezzar, A. Krasznahorkay, N. Kurz, E. La Guidara, G. Lanzalone, P. Lasko, Q. Li, I. Lombardo, W. G. Lynch, Z. Matthews, L. May, T. Minniti, M. Mostazo, M. Papa, S. Pirrone, G. Politi, F. Porto, R. Reifarth, W. Reisdorf, F. Riccio, F. Rizzo, E. Rosato, D. Rossi, H. Simon, I. Skwirczynska, Z. Sosin, L. Stuhl, A. Trifiró, M. Trimarchi, M. B. Tsang, G. Verde, M. Vigilante, A. Wieloch, P. Wigg, H. H. Wolter, P. Wu, S. Yennello, P. Zambon, L. Zetta, M. Zoric
The elliptic-flow ratio of neutrons with respect to protons in reactions of neutron rich heavy-ions systems at intermediate energies has been proposed as an observable sensitive to the strength of the symmetry term in the nuclear Equation Of State (EOS) at supra-saturation densities. The recent results obtained from the existing FOPI/LAND data for $^{197}$Au+$^{197}$Au collisions at 400 MeV/nucleon in comparison with the UrQMD model allowed a first estimate of the symmetry term of the EOS but suffer from a considerable statistical uncertainty. In order to obtain an improved data set for Au+Au collisions and to extend the study to other systems, a new experiment was carried out at the GSI laboratory by the ASY-EOS collaboration in May 2011.
Systematics of azimuthal asymmetries in heavy ion collisions in the 1 A GeV regime (1112.3180)
FOPI Collaboration: W. Reisdorf, Y. Leifels, A. Andronic, R. Averbeck, V. Barret, Z. Basrak, N. Bastid, M. L. Benabderrahmane, R. Caplar, P. Crochet, P. Dupieux, M. Dzelalija, Z. Fodor, P. Gasik, Y. Grishkin, O. N. Hartmann, N. Herrmann, K. D. Hildenbrand, B. Hong, T. I. Kang, J. Kecskemeti, Y. J. Kim, M. Kirejczyk, M. Kis, P. Koczon, M. Korolija, R. Kotte, T. Kress, A. Lebedev, X. Lopez, T. Matulewicz, M. Merschmeyer, W. Neubert, M. Petrovici, K. Piasecki, F. Rami, M. S. Ryu, A. Schuettauf, Z. Seres, B. Sikora, K. S. Sim, V. Simion, K. Siwek-Wilczynska, V. Smolyankin, M. Stockmeier, G. Stoicea, Z. Tyminski, K. Wisniewski, D. Wohlfarth, Z. G. Xiao, H. S. Xu, I. Yushmanov, A. Zhilin
Using the large acceptance apparatus FOPI, we study central and semi-central collisions in the reactions (energies in A GeV are given in parentheses): 40Ca+40Ca (0.4, 0.6, 0.8, 1.0, 1.5, 1.93), 58Ni+58Ni (0.15, 0.25, 0.4), 96Ru+96Ru (0.4, 1.0, 1.5), 96Zr+96Zr (0.4, 1.0, 1.5), 129Xe+CsI (0.15, 0.25, 0.4), 197Au+197Au (0.09, 0.12, 0.15, 0.25, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5). The observables include directed and elliptic flow. The data are compared to earlier data where possible and to transport model simulations. A stiff nuclear equation of state is found to be incompatible with the data. Evidence for extra-repulsion of neutrons in compressed asymmetric matter is found.
Proton-deuteron radiative capture cross sections at intermediate energies (1104.2896)
A.A. Mehmandoost-Khajeh-Dad, M. Mahjour-Shafiei, H.R. Amir-Ahmadi, J.C.S. Bacelar, A.M. van den Berg, R. Castelijns, E.D. van Garderen, N. Kalantar-Nayestanaki, M. Kiš, H. Löhner, J.G. Messchendorp, H.J. Wörtche
April 14, 2011 nucl-ex
Differential cross sections of the reaction $p(d,^3{\rm He})\gamma$ have been measured at deuteron laboratory energies of 110, 133 and 180 MeV. The data were obtained with a coincidence setup measuring both the outgoing $^3$He and the photon. The data are compared with modern calculations including all possible meson-exchange currents and two- and three-nucleon forces in the potential. The data clearly show a preference for one of the models, although the shape of the angular distribution cannot be reproduced by any of the presented models.
Systematics of central heavy ion collisions in the 1A GeV regime (1005.3418)
FOPI Collaboration: W. Reisdorf, A. Andronic, R. Averbeck, M. L. Benabderrahmane, O. N. Hartmann, N. Herrmann, K. D. Hildenbrand, T. I. Kang, Y. J. Kim, M. Kis, P. Koczon, T. Kress, Y. Leifels, M. Merschmeyer, K. Piasecki, A. Schuettauf, M. Stockmeier, V. Barret, Z. Basrak, N. Bastid, R. Caplar, P. Crochet, P. Dupieux, M. Dzelalija, Z. Fodor, P. Gasik, Y. Grishkin, B. Hong, J. Kecskemeti, M. Kirejczyk, M. Korolija, R. Kotte, A. Lebedev, X. Lopez, T. Matulewicz, W. Neubert, M. Petrovici, F. Rami, M. S. Ryu, Z. Seres, B. Sikora, K. S. Sim, V. Simion, K. Siwek-Wilczynska, V. Smolyankin, G. Stoicea, Z. Tyminski, K. Wisniewski, D. Wohlfarth, Z. G. Xiao, H. S. Xu, I. Yushmanov, A. Zhilin
Using the large acceptance apparatus FOPI, we study central collisions in the reactions (energies in A GeV are given in parentheses): 40Ca+40Ca (0.4, 0.6, 0.8, 1.0, 1.5, 1.93), 58Ni+58Ni (0.15, 0.25, 0.4), 96Ru+96Ru (0.4, 1.0, 1.5), 96Zr+96Zr (0.4, 1.0, 1.5), 129Xe+CsI (0.15, 0.25, 0.4), 197Au+197Au (0.09, 0.12, 0.15, 0.25, 0.4, 0.6, 0.8, 1.0, 1.2, 1.5). The observables include cluster multiplicities, longitudinal and transverse rapidity distributions and stopping, and radial flow. The data are compared to earlier data where possible and to transport model simulations.
Spin-isospin selectivity in three-nucleon forces (0908.1099)
H. Mardanpour, H.R. Amir-Ahmadi, R. Benard, A. Biegun, M. Eslami-Kalantari, L. Joulaeizadeh, N. Kalantar-Nayestanaki, M. Kiš, St. Kistryn, A. Kozela, H. Kuboki, Y. Maeda, M. Mahjour-Shafiei, J.G. Messchendorp, K. Miki, S. Noji, A. Ramazani-Moghaddam-Arani, H. Sakai, M. Sasano, K. Sekiguchi, E. Stephan, R. Sworst, Y. Takahashi, K. Yako
Aug. 7, 2009 nucl-ex
Precision data are presented for the break-up reaction $^2{\rm H}(\vec p,pp)n$ within the framework of nuclear-force studies. The experiment was carried out at KVI using a polarized-proton beam of 190 MeV impinging on a liquid-deuterium target and exploiting the BINA detector. Some of the vector-analyzing powers are presented and compared with state-of-the-art Faddeev calculations including three-nucleon force effects. Significant discrepancies between the data and the theoretical predictions were observed for kinematical configurations which correspond to the $^2{\rm H}(\vec p,^2$He$)n$ channel. These results are compared to the $^2{\rm H}(\vec p,d)p$ reaction to test the isospin sensitivity of the present three-nucleon force models. The current modeling of two- and three-nucleon forces is not sufficient to consistently describe polarization data for both isospin states.
Measurement of the in-medium K0 inclusive cross section in pi- -induced reactions at 1.15 GeV/c (0807.3361)
M. L. Benabderrahmane, N. Herrmann, K. Wisniewski, J. Kecskemeti, A. Andronic, V. Barret, Z. Basrak, N. Bastid, P. Buehler, M. Cargnelli, R. Caplar, E. Cordier, I. Deppner, P. Crochet, P. Dupieux, M. Dzelalija, L. Fabbietti, Z. Fodor, P. Gasik, I. Gasparic, Y. Grishkin, O. N. Hartmann, K. D. Hildenbrand, B. Hong, T. I. Kang, P. Kienle, M. Kirejczyk, Y. J. Kim, M. Kis, P. Koczon, M. Korolija, R. Kotte, A. Lebedev, Y. Leifels, X. Lopez, V. Manko, J. Marton, A. Mangiarotti, M. Merschmeyer, T. Matulewicz, M. Petrovici, K. Piasecki, F. Rami, A. Reischl, W. Reisdorf, M. S. Ryu, P. Schmidt, A. Schuttauf, Z. Seres, B. Sikora, K. S. Sim, V. Simion, K. Siwek-Wilczynska, V. Smolyankin, K. Suzuki, Z. Tyminski, E. Widmann, Z. G. Xiao, T. Yamazaki, I. Yushmanov, X. Y. Zhang, A. Zhilin, J. Zmeskal, E. Bratkovskaya, W. Cassing
The K0 meson production by pi- mesons of 1.15 GeV/c momentum on C, Al, Cu, Sn and Pb nuclear targets was measured with the FOPI spectrometer at the SIS accelerator of GSI. Inclusive production cross-sections and the momentum distributions of K0 mesons are compared to scaled elementary production cross-sections and to predictions of theoretical models describing the in-medium production of kaons. The data represent a new reference for those models, which are widely used for the interpretation of strangeness production in heavy-ion collisions. The presented results demonstrate the sensitivity of the kaon production to the reaction amplitudes inside nuclei and point to the existence of a repulsive KN potential of 20 $\pm$ 5 MeV at normal nuclear matter density.
Evidence of the Coulomb force effects in the cross sections of the deuteron-proton breakup at 130 MeV (nucl-ex/0607002)
St. Kistryn, E. Stephan, B. Klos, A. Biegun, K. Bodek, I. Ciepal, A. Deltuva, A. Fonseca, N. Kalantar-Nayestanaki, M. Kis, A. Kozela, M. Mahjour-Shafiei, A. Micherdzinska, P. U. Sauer, R. Sworst, J. Zejma, W. Zipper
High precision cross-section data of the deuteron-proton breakup reaction at 130 MeV deuteron energy are compared with the theoretical predictions obtained with a coupled-channel extension of the CD Bonn potential with virtual Delta-isobar excitation, without and with inclusion of the long-range Coulomb force. The Coulomb effect is studied on the basis of the cross-section data set, extended in this work to about 1500 data points by including breakup geometries characterized by small polar angles of the two protons. The experimental data clearly prefer predictions obtained with the Coulomb interaction included. The strongest effects are observed in regions in which the relative energy of the two protons is the smallest.
Systematic study of three-nucleon force effects in the cross section of the deuteron-proton breakup at 130 MeV (nucl-ex/0508012)
St. Kistryn, E. Stephan, A. Biegun, K. Bodek, A. Deltuva, E. Epelbaum, K. Ermisch, W. Gloeckle, J. Golak, N. Kalantar-Nayestanaki, H. Kamada, M. Kis, B. Klos, A. Kozela, J. Kuros-Zolnierczuk, M. Mahjour-Shafiei, U.-G. Meissner, A. Micherdzinska, A. Nogga, P. U. Sauer, R. Skibinski, R. Sworst, H. Witala, J. Zejma, W. Zipper
Aug. 11, 2005 nucl-ex
High-precision cross-section data of the deuteron-proton breakup reaction at 130 MeV are presented for 72 kinematically complete configurations. The data cover a large region of the available phase space, divided into a systematic grid of kinematical variables. They are compared with theoretical predictions, in which the full dynamics of the three-nucleon (3N) system is obtained in three different ways: realistic nucleon-nucleon (NN) potentials are combined with model 3N forces (3NF's) or with an effective 3NF resulting from explicit treatment of the Delta-isobar excitation. Alternatively, the chiral perturbation theory approach is used at the next-to-next-to-leading order with all relevant NN and 3N contributions taken into account. The generated dynamics is then applied to calculate cross-section values by rigorous solution of the 3N Faddeev equations. The comparison of the calculated cross sections with the experimental data shows a clear preference for the predictions in which the 3NF's are included. The majority of the experimental data points are well reproduced by the theoretical predictions. The remaining discrepancies are investigated by inspecting cross sections integrated over certain kinematical variables. The procedure of global comparisons leads to establishing regularities in the disagreements between the experimental data and the theoretically predicted values of the cross sections. They indicate deficiencies still present in the assumed models of the 3N system dynamics.
Spin observables in deuteron-proton radiative capture at intermediate energies (nucl-ex/0501012)
A. A. Mehmandoost-Khajeh-Dad, H. R. Amir-Ahmadi, J. C. S. Bacelar, A. M. van den Berg, R. Castelijns, A. Deltuva, E. D. van Garderen, W. Glöckle, J. Golak, N. Kalantar-Nayestanaki, H. Kamada, M. Kiš, R. Koohi-Fayegh-Dehkordi, H. Löhner, M. Mahjour-Shafiei, H. Mardanpur, J. G. Messchendorp, A. Nogga, P. Sauer, S. V. Shende, R. Skibinski, H. Witała, H. J. Wörtche
Jan. 17, 2005 nucl-ex
A radiative deuteron-proton capture experiment was carried out at KVI using polarized-deuteron beams at incident energies of 55, 66.5, and 90 MeV/nucleon. Vector and tensor-analyzing powers were obtained for a large angular range. The results are interpreted with the help of Faddeev calculations, which are based on modern two- and three-nucleon potentials. Our data are described well by the calculations, and disagree significantly with the observed tensor anomaly at RCNP.
Systematic investigation of the elastic proton-deuteron differential cross section at intermediate energies (nucl-ex/0308012)
K. Ermisch, H.R. Amir-Ahmadi, A.M. van den Berg, R. Castelijns, B. Davids, E. Epelbaum, E. Van Garderen, W. Gloeckle, J. Golak, M.N. Harakeh, M.A. de Huu, N. Kalantar-Nayestanaki, H. Kamada, M. Kis, M. Mahjour-Shafiei, A. Nogga, R. Skibinski, H. Witala, H.J. Woertche
To investigate the importance of three-nucleon forces (3NF) systematically over a broad range of intermediate energies, the differential cross sections of elastic proton-deuteron scattering have been measured at proton bombarding energies of 108, 120, 135, 150, 170 and 190 MeV at center-of-mass angles between $30^\circ$ and $170^\circ$. Comparisons with Faddeev calculations show unambiguously the shortcomings of calculations employing only two-body forces and the necessity of including 3NF. They also show the limitations of the latest few-nucleon calculations at backward angles, especially at higher beam energies. Some of these discrepancies could be partially due to relativistic effects. Data at the lowest energy are also compared with a recent calculation based on chiral perturbation theory ($\chi$PT).
June 2014, 1(2): 233-248. doi: 10.3934/jcd.2014.1.233
Reconstructing functions from random samples
Steve Ferry 1, Konstantin Mischaikow 2, and Vidit Nanda 3
Department of Mathematics, Rutgers University, Piscataway, NJ 08854, United States
Rutgers University, 110 Frelinghuysen Road, Piscataway, NJ 08854, United States
Department of Mathematics, University of Pennsylvania, Philadelphia, PA 19104, United States
Received May 2012 Revised May 2013 Published December 2014
From a sufficiently large point sample lying on a compact Riemannian submanifold of Euclidean space, one can construct a simplicial complex which is homotopy-equivalent to that manifold with high confidence. We describe a corresponding result for a Lipschitz-continuous function between two such manifolds. That is, we outline the construction of a simplicial map which recovers the induced maps on homotopy and homology groups with high confidence using only finite sampled data from the domain and range, as well as knowledge of the image of every point sampled from the domain. We provide explicit bounds on the size of the point samples required for such reconstruction in terms of intrinsic properties of the domain, the co-domain and the function. This reconstruction is robust to certain types of bounded sampling and evaluation noise.
Keywords: nonlinear maps, homology, homotopy, topological inference.
Mathematics Subject Classification: Primary: 55U05, 55U10, 55U15; Secondary: 62-0.
Citation: Steve Ferry, Konstantin Mischaikow, Vidit Nanda. Reconstructing functions from random samples. Journal of Computational Dynamics, 2014, 1 (2) : 233-248. doi: 10.3934/jcd.2014.1.233
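To give a concrete, if much simplified, feel for the kind of construction described in the abstract above, the sketch below builds a Vietoris-Rips complex from a noisy point sample of the circle and computes its Betti numbers over GF(2). It is an illustrative toy only: the paper's construction comes with explicit sampling-density bounds and also recovers maps between spaces, whereas here the sample size N and scale EPS are parameters chosen by hand.

import itertools, math, random

random.seed(0)
N, EPS = 40, 0.4
# noisy sample of the unit circle (near-uniform angles with a small jitter)
pts = []
for k in range(N):
    t = 2.0 * math.pi * k / N + random.uniform(-0.02, 0.02)
    r = 1.0 + random.uniform(-0.03, 0.03)
    pts.append((r * math.cos(t), r * math.sin(t)))

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

# Vietoris-Rips complex at scale EPS: edges and triangles
edges = [e for e in itertools.combinations(range(N), 2) if dist(pts[e[0]], pts[e[1]]) <= EPS]
edge_index = {e: i for i, e in enumerate(edges)}
triangles = [t for t in itertools.combinations(range(N), 3)
             if all(p in edge_index for p in itertools.combinations(t, 2))]

def gf2_rank(rows):
    # Gaussian elimination over GF(2); each row is an integer bitmask.
    rows, rank = list(rows), 0
    ncols = max((r.bit_length() for r in rows), default=0)
    for col in range(ncols):
        pivot = next((i for i in range(rank, len(rows)) if (rows[i] >> col) & 1), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for i in range(len(rows)):
            if i != rank and (rows[i] >> col) & 1:
                rows[i] ^= rows[rank]
        rank += 1
    return rank

# Boundary maps as bitmask rows: d1 sends edges to vertices, d2 sends triangles to edges.
d1 = [(1 << a) | (1 << b) for (a, b) in edges]
d2 = [sum(1 << edge_index[p] for p in itertools.combinations(t, 2)) for t in triangles]
r1, r2 = gf2_rank(d1), gf2_rank(d2)
betti0 = N - r1
betti1 = len(edges) - r1 - r2
print("Betti numbers:", betti0, betti1)  # expect 1 and 1 for a good sample of the circle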
Stable Higgs bundles over positive principal elliptic fibrations
Indranil Biswas, Mahan Mj, and Misha Verbitsky
principal elliptic fibrations, Math. Res. Lett. 12 (2005), 251-264.
in Complex Manifolds
Normal functions, Picard--Fuchs equations, and elliptic fibrations on K3 surfaces
Xi Chen, Charles Doran, Matt Kerr, and James D. Lewis
normal functions – a subject of increasing interest due to their recent spectacular use in open string mirror symmetry [24] – which is further amplified by explicit examples in Section 5. The second half of the paper takes up the question of how to use the geometry of polarized $K3$ surfaces with high Picard rank to construct indecomposable cycles (Sections 5 and 6). Elliptic fibrations yield an extremely natural source of families of cycles, whose image under the real and transcendental regulator maps has apparently not been previously studied. Our
in Journal für die reine und angewandte Mathematik
On the density of rational points on elliptic fibrations
F. A. Bogomolov and Yu. Tschinkel
Let X be an algebraic variety defined over a number field F. We will say that rational points are potentially dense if there exists a finite extension K/F such that the set of K-rational points X(K) is Zariski dense in X. The main problem is to relate this property to geometric invariants of X. Hypothetically, on varieties of general type rational points are not potentially dense. In this paper we are interested in smooth projective varieties such that neither they nor their unramified coverings admit a dominant map onto varieties of general type. For these varieties it seems plausible to expect that rational points are potentially dense (see [2]).
When are Zariski chambers numerically determined?
Sławomir Rams and Tomasz Szemberg
that the support of the (non-trivial) negative part of the Zariski decomposition of every big divisor on X consists of pairwise disjoint curves. Indeed, the condition in question implies that if the intersection matrix of two irreducible negative curves $C_{1},C_{2}\subset X$ is negative-definite, then it is diagonal. After proving Theorem 3 in Section 2, we study the relation between elliptic fibrations and Zariski chambers on Enriques and K3 surfaces in Section 3. It should be mentioned that this note was motivated and inspired by the
in Forum Mathematicum
On quartics with lines of the second kind
Sławomir Rams and Matthias Schütt
Adv. Geom. 14 (2014), 735–756, DOI 10.1515/advgeom-2014-0018, © de Gruyter 2014. Abstract: We study the geometry of quartic surfaces in P3 that contain a line of the second kind over algebraically closed fields of characteristic different from 2, 3. In particular, we correct Segre's claims made for the complex case in 1943. Key words: line, quartic, elliptic fibration, K3 surface. 2010 Mathematics Subject Classification: Primary 14J28
in Advances in Geometry
Weighted Fano threefold hypersurfaces
Ivan Cheltsov and Jihun Park
J. reine angew. Math. 600 (2006), 81–116, DOI 10.1515/CRELLE.2006.087, © Walter de Gruyter 2006. Abstract: We study birational transformations into elliptic fibrations and birational automorphisms of quasismooth anticanonically embedded weighted Fano 3-fold hypersurfaces with terminal singularities classified by A. R. Iano-Fletcher, J. Johnson, J. Kollár, and M. Reid. 1. Introduction. Let S be a smooth cubic
On the equidistribution of some Hodge loci
Salim Tayou
(ii) The previous set is equidistributed in S with respect to μ. (iii) If $P^{\vee}/P$ has no non-trivial isotropic subgroup, then the set of points $s\in S$ (counted with multiplicity) for which $\mathcal{X}_{s}$ admits an elliptic fibration of norm less than n is equidistributed with respect to μ as n tends to infinity. For the definition of a parabolic line bundle of type $(\gamma,n)$ and the norm of an elliptic fibration, we refer to Definition
Simpson Jacobians of reducible curves
Ana Cristina López-Martín
give explicitly the structure of this compactified Simpson Jacobian for the following projective curves: tree-like curves and all reduced and reducible curves that can appear as Kodaira singular fibers of an elliptic fibration, that is, the fibers of types III, IV and $I_N$ with $N \ge 2$. 1. Introduction. The problem of compactifying the (generalized) Jacobian of a singular curve has been studied since Igusa's work [16] around 1950. He constructed a compactification of the Jacobian of a nodal and irreducible curve X as the limit of the Jacobians of smooth curves
Semicomplete meromorphic vector fields on complex surfaces
Adolfo Guillot and Julio Rebelo
curve of poles is either a rational or an elliptic curve of null self-intersection or it has the combinatorics of a singular fiber of an elliptic fibration. This result is then globalized by proving that, always up to a birational transformation, a semicomplete meromorphic vector field on a compact complex Kähler surface must satisfy at least one of the following conditions: to be globally holomorphic, to possess a non-trivial meromorphic first integral or to preserve a fibration. In particular, this extends the results established by Brunella for complete
Classifying bases for 6D F-theory models
David Morrison and Washington Taylor
We classify six-dimensional F-theory compactifications in terms of simple features of the divisor structure of the base surface of the elliptic fibration. This structure controls the minimal spectrum of the theory. We determine all irreducible configurations of divisors ("clusters") that are required to carry nonabelian gauge group factors based on the intersections of the divisors with one another and with the canonical class of the base. All 6D F-theory models are built from combinations of these irreducible configurations. Physically, this geometric structure characterizes the gauge algebra and matter that can remain in a 6D theory after maximal Higgsing. These results suggest that all 6D supergravity theories realized in F-theory have a maximally Higgsed phase in which the gauge algebra is built out of summands of the types su(3), so(8), f4, e6, e7, e8, (g2 ⊕ su(2)), and su(2) ⊕ so(7) ⊕ su(2), with minimal matter content charged only under the last three types of summands, corresponding to the non-Higgsable cluster types identified through F-theory geometry. Although we have identified all such geometric clusters, we have not proven that there cannot be an obstruction to Higgsing to the minimal gauge and matter configuration for any possible F-theory model. We also identify bounds on the number of tensor fields allowed in a theory with any fixed gauge algebra; we use this to bound the size of the gauge group (or algebra) in a simple class of F-theory bases.
in Open Physics
|
CommonCrawl
|
How to prove that any natural number $n \geq 34$ can be written as the sum of distinct triangular numbers?
Sloane's A053614 implies that $2, 5, 8, 12, 23$, and $33$ are the only natural numbers $n \geq 1$ which cannot be written as the sum of distinct triangular numbers (i.e., numbers of the form $\binom{k}{2}$, beginning $1,3,6,10,15,\ldots$).
Question: How to prove that any natural number $n \geq 34$ can be written as the sum of distinct triangular numbers?
$34=\binom{8}{2}+\binom{4}{2}$,
$35=\binom{8}{2}+\binom{4}{2}+\binom{2}{2}$,
$36=\binom{9}{2}$,
Sloane's A061208 links to a math olympiad question (page 207) which asks to prove this for $n \leq 1997$, but the given proof is not in English, so I neither understand it, nor can I be sure if it works for all $n$.
elementary-number-theory binomial-coefficients
Rebecca J. Stones
$\begingroup$ The result is not general, essentially explicitly giving the answer for $n \le 2819$. The final line of the solution refers to Erdős for a general approach. $\endgroup$ – Sharkos Aug 3 '15 at 8:34
$\begingroup$ The approach of Pierre Bornzstein in the linked pdf can't be generalized. He just examines all the cases. It starts with the smallest integers and uses the previous decomposition to get the following ones. $\endgroup$ – user37238 Aug 3 '15 at 8:54
This follows from a theorem of Richert:
Theorem Suppose that $k \ge 2, N \ge 0, M \ge0$ satisfy
Whenever $N<x \le N + M$, $x$ is a sum of distinct elements chosen from among the first $k$ elements of a set $S = \{s_1, s_2, \ldots\}$, where $s_1 < s_2 < \cdots$.
$M \ge s_{k+1}$
$2 s_i \ge s_{i+1}$ for all $i \ge k$ (this is all the induction below uses)
Then every integer greater than $N$ is a sum of distinct elements of $S$.
Take $k=8, N=33, M=45$ to obtain the desired result.
Proof of Theorem To prove the theorem, let $I_p = \{N+1, N+2, \ldots, N + s_{p+1}\}$. Then by assumption, all elements of $I_k$ are sums of distinct elements of $s_1, \ldots, s_k$.
But now observe that if this is true for some general $p$, then $$I_p \cup \{m + s_{p+1} : m \in I_p\}$$ contains $I_{p+1}$, as a consequence of $s_{p+2} \le 2 s_{p+1}$. Hence all elements of $I_{p+1}$ are sums of the $s_1, \ldots, s_{p+1}$.
Hence inductively the result follows by considering $\bigcup_{p\ge k} I_p$, which contains all integers larger than $N$, and contains only elements which are sums of distinct $s_i$.
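(Added sanity check, not part of the original answer.) The three hypotheses can be verified numerically for the triangular numbers with $k=8$, $N=33$, $M=45$, for instance with the following short R sketch:

```r
# Check Richert's hypotheses for k = 8, N = 33, M = 45 and the triangular numbers.
tri <- cumsum(1:30)          # 1, 3, 6, 10, 15, 21, 28, 36, 45, ...
s   <- tri[1:8]              # the first k = 8 triangular numbers

# Hypothesis 1: every x with 33 < x <= 78 is a sum of distinct elements of s.
sums <- 0
for (x in s) sums <- c(sums, sums + x)   # all sums of distinct elements of s
all(34:78 %in% sums)                     # TRUE

# Hypothesis 2: M >= s_{k+1} = 45.
45 >= tri[9]                             # TRUE

# Hypothesis 3 (used for i >= k in the induction): 2 s_i >= s_{i+1}.
all(2 * tri[8:29] >= tri[9:30])          # TRUE
```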
edited Aug 3 '15 at 9:17
Sharkos
$\begingroup$ Can you explain in your theorem what is the sequence $(s_n)_{n\in \mathbb{N}}$? $\endgroup$ – user37238 Aug 3 '15 at 8:55
$\begingroup$ Done! Thanks for catching that. I think I've labelled the indices in the last part correctly now too. Let me know if I haven't! $\endgroup$ – Sharkos Aug 3 '15 at 8:56
$\begingroup$ Interestingly, the same question with primes (so using Bertrand's postulate) popped up today as well, I think $\endgroup$ – Hagen von Eitzen Aug 3 '15 at 9:02
|
CommonCrawl
|
How do I show that $\sum_{i = 1}^n \frac 1{\sqrt{a_n}} \lt \frac {\sqrt 3}6$ for $a_n = 4n(4n + 1)(4n + 2)$?
Let $a_n = 4n(4n + 1)(4n + 2)$, show that $$\sum_{i = 1}^n \frac 1{\sqrt{a_i}} \lt \frac {\sqrt 3}6 \quad \forall n \in \mathbb{N}^+.$$
I know I need to find an upper bound for $1/\sqrt{a_n}$ but I can't see how, especially with the square root. Any hints will be appreciated!
sequences-and-series algebra-precalculus inequality
Jack D'Aurizio
Colescu
$\begingroup$ You will need quite strong inequalities to get the result as the difference between the sum and the upper bound is quite small here. We have $\sum \simeq 0.2788$ while $\sqrt{3}/6 \simeq 0.2886$ $\endgroup$ – Winther Apr 24 '16 at 15:19
$\begingroup$ $\sum=0.2611$wolframalpha.com/input/?i=sigma+1%2Fsqrt(4k*(4k%2B1)(4k%2B2)) $\endgroup$ – Takahiro Waki May 2 '16 at 19:37
We may notice that: $$ \forall n\geq 1,\qquad \frac{1}{\sqrt{4n(4n+1)(4n+2)}}\leq \frac{1}{2}\left(\frac{1}{\sqrt{4n-1}}-\frac{1}{\sqrt{4n+3}}\right) $$ hence it follows that: $$ \sum_{n=1}^{N}\frac{1}{\sqrt{4n(4n+1)(4n+2)}} < \left.\frac{1}{2\sqrt{4n-1}}\right|_{n=1}=\color{red}{\frac{1}{2\sqrt{3}}}$$ as wanted. Creative telescoping wins again.
$\begingroup$ The first inequality is quite out of the blue, how did you come up with it ? $\endgroup$ – Gabriel Romon Apr 24 '16 at 15:33
$\begingroup$ @LeGrandDODOM: I wanted to approximate the main term with something like $g(n)-g(n+1)$, in order to apply creative telescoping. Since the main term behaves like $\frac{1}{n^{3/2}}$, $g(n)$ has to behave like $\frac{1}{n^{1/2}}$, so I tried something of the form $g(n)=\frac{A}{\sqrt{4n+B}}$ and got a working inequality with some trial and error. $\endgroup$ – Jack D'Aurizio Apr 24 '16 at 15:36
$\begingroup$ This is a pretty answer (+1). $\endgroup$ – Olivier Oloa Apr 24 '16 at 15:41
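(Added numerical sanity check, not part of the thread.) The term-wise bound and the telescoped estimate can be confirmed in R:

```r
# Check the creative-telescoping bound term by term and compare the partial sum
# with the claimed upper bound 1/(2*sqrt(3)).
n   <- 1:10000
lhs <- 1 / sqrt(4 * n * (4 * n + 1) * (4 * n + 2))
rhs <- 0.5 * (1 / sqrt(4 * n - 1) - 1 / sqrt(4 * n + 3))
all(lhs <= rhs)      # TRUE for the first 10,000 terms
sum(lhs)             # about 0.279, in line with Winther's comment above
1 / (2 * sqrt(3))    # about 0.2887, the telescoped upper bound
```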
$$ \begin{align} \frac12\sum_{k=1}^\infty\left(\frac1{\sqrt{4k-1}}-\frac1{\sqrt{4k+3}}\right) &=\sum_{k=1}^\infty\frac2{\sqrt{4k-1}\sqrt{4k+3}\left(\sqrt{4k-1}+\sqrt{4k+3}\right)}\tag{1}\\ &\ge\sum_{k=1}^\infty\frac1{\sqrt{4k-1}\sqrt{4k+3}\sqrt{4k+1}}\tag{2}\\ &\ge\sum_{k=1}^\infty\frac1{\sqrt{4k}\sqrt{4k+2}\sqrt{4k+1}}\tag{3} \end{align} $$ Explanation:
$(1)$: arithmetic
$(2)$: concavity of $\sqrt{x}$ says that $\frac{\sqrt{x}+\sqrt{y}}2\le\sqrt{\frac{x+y}2}$
$(3)$: $(4k-1)(4k+3)\le4k(4k+2)$ by expanding
This says that $$ \begin{align} \sum_{k=1}^\infty\frac1{\sqrt{4k(4k+1)(4k+2)}} &\le\frac12\sum_{k=1}^\infty\left(\frac1{\sqrt{4k-1}}-\frac1{\sqrt{4k+3}}\right)\\ &=\frac12\frac1{\sqrt3}\\ &=\frac{\sqrt3}6\tag{4} \end{align} $$
Since the terms of the sum are $\sim\frac18k^{-3/2}$, it is often useful to consider a telescoping series where the terms are differences of something $\sim\frac14k^{-1/2}$ because such a difference is $\sim\frac18k^{-3/2}$.
robjohn♦
$\begingroup$ I just realized that this is the same as Jack's answer, so I have added my motivation. $\endgroup$ – robjohn♦ Apr 24 '16 at 15:53
An attempt :
$$\sum_{k=2}^n \frac{1}{(4k(4k+1)(4k+2))^{1/2}}\le\sum_{k=2}^n \frac{1}{(4k)^{3/2}}\le\sum_{k=2}^\infty \frac{1}{(4k)^{3/2}}\le\int_1^\infty \frac{1}{(4x)^{3/2}}\,dx=\frac{1}{4}. $$
We only get that the sum is smaller than $\frac{1}{4}+\frac{1}{\sqrt {120}}$.
mrprottolo
$\begingroup$ No, your proof is flawed... This inequality $\sum_{n=1}^\infty \frac{1}{(4n)^{3/2}}\le\int_1^\infty \frac{1}{(4x)^{3/2}}$ is wrong. Even if you fix it, the bound you get is not sharp enough. Integral method is not quite sharp enough here (you can make it work but it's tedious) $\endgroup$ – Gabriel Romon Apr 24 '16 at 15:15
$\begingroup$ Yes, I forgot, the term $n=1$. $\endgroup$ – mrprottolo Apr 24 '16 at 15:16
$\begingroup$ @LeGrandDODOM The integral method can be made to work. Just explicitly sum the first $N$ terms (say $N=5$) and then use the integral test to bound the remainding terms. It's not pretty but it does work. $\endgroup$ – Winther Apr 24 '16 at 15:21
$\begingroup$ @Winther $N=4$ works, but without a calculator you're not going anywhere $\endgroup$ – Gabriel Romon Apr 24 '16 at 15:22
$\begingroup$ I had the proof with $N=5$, but I did not find it pretty :) $\endgroup$ – Olivier Oloa Apr 24 '16 at 15:24
You can write the inequality as $2\sqrt{3}<\sqrt{4n(4n+1)(4n+2)}$; squaring, we get $12<4n(4n+1)(4n+2)$. The right-hand side is continuously increasing since $n$ is positive. Also, $n\in \mathbb{N}^+$, so the base case is $n=1$; plugging in we get $120$, and $12<120$, so the inequality holds for all $n$ since the function is monotonic and increasing.
Archis Welankar
$\begingroup$ I don't understand you? Can you explain it in greater detail? $\endgroup$ – Colescu Apr 24 '16 at 14:39
$\begingroup$ Plug in value of n as $100$ then $200$ you will see the value approaches $0$ $\endgroup$ – Archis Welankar Apr 24 '16 at 14:48
$\begingroup$ But my question is not the limit of it. I need to prove that inequality and I can't see how this limit will help with that? $\endgroup$ – Colescu Apr 24 '16 at 14:48
$\begingroup$ Wait I got my mistake let me edit $\endgroup$ – Archis Welankar Apr 24 '16 at 14:55
|
CommonCrawl
|
4.6: Exponential and Logarithmic Models
[ "article:topic", "license:ccbysa", "showtoc:no", "authorname:lippmanrasmussen" ]
Contributed by David Lippman & Melonie Rasmussen
Professors (Mathematics) at Pierce College
Sourced from The OpenTextBookStore
While we have explored some basic applications of exponential and logarithmic functions, in this section we explore some important applications in more depth.
Radioactive Decay
In an earlier section, we discussed radioactive decay – the idea that radioactive isotopes change over time. One of the common terms associated with radioactive decay is half-life.
Definition: Half Life
The half-life of a radioactive isotope is the time it takes for half the substance to decay.
Given the basic exponential growth/decay equation \(h(t)=ab^{t}\), half-life can be found by solving for when half the original amount remains; by solving \(\frac{1}{2} a=a(b)^{t}\), or more simply \(\frac{1}{2} =b^{t}\). Notice how the initial amount is irrelevant when solving for half-life.
Bismuth-210 is an isotope that decays by about 13% each day. What is the half-life of Bismuth-210?
We were not given a starting quantity, so we could either make up a value or use an unknown constant to represent the starting amount. To show that starting quantity does not affect the result, let us denote the initial quantity by the constant a. Then the decay of Bismuth-210 can be described by the equation \(Q(d)=a(0.87)^{d}\).
To find the half-life, we want to determine when the remaining quantity is half the original: \(\frac{1}{2} a\). Solving,
\(\frac{1}{2} a=a(0.87)^{d}\) Divide by \(a\),
\(\frac{1}{2} =0.87^{d}\) Take the log of both sides
\(\log \left(\frac{1}{2} \right)=\log \left(0.87^{d} \right)\) Use the exponent property of logs
\(\log \left(\frac{1}{2} \right)=d\log \left(0.87\right)\) Divide to solve for \(d\)
\(d=\frac{\log \left(\frac{1}{2} \right)}{\log \left(0.87\right)} \approx 4.977\) days
This tells us that the half-life of Bismuth-210 is approximately 5 days.
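A quick numerical check of this calculation (an added illustration; any tool with logarithms works, here R):

```r
# Half-life of Bismuth-210: solve (1/2) = 0.87^d for d.
log(0.5) / log(0.87)   # approximately 4.977 days
```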
Cesium-137 has a half-life of about 30 years. If you begin with 200 mg of cesium-137, how much will remain after 30 years? 60 years? 90 years?
Since the half-life is 30 years, after 30 years, half the original amount, 100 mg, will remain.
After 60 years, another 30 years have passed, so during that second 30 years, another half of the substance will decay, leaving 50 mg.
After 90 years, another 30 years have passed, so another half of the substance will decay, leaving 25 mg.
Cesium-137 has a half-life of about 30 years. Find the annual decay rate.
Since we are looking for an annual decay rate, we will use an equation of the form \(Q(t)=a(1+r)^{t}\). We know that after 30 years, half the original amount will remain. Using this information
\(\frac{1}{2} a=a(1+r)^{30}\) Dividing by \(a\)
\(\frac{1}{2} =(1+r)^{30}\) Taking the 30\({}^{th}\) root of both sides
\(\sqrt[{30}]{\frac{1}{2} } =1+r\) Subtracting one from both sides,
\(r=\sqrt[{30}]{\frac{1}{2} } -1\approx -0.02284\)
This tells us cesium-137 is decaying at an annual rate of 2.284% per year.
Chlorine-36 is eliminated from the body with a biological half-life of 10 days (http://www.ead.anl.gov/pub/doc/chlorine.pdf). Find the daily decay rate.
\(r = \sqrt[10]{\dfrac{1}{2}} - 1 \approx -0.067\), or 6.7% is the daily rate of decay.
Carbon-14 is a radioactive isotope that is present in organic materials, and is commonly used for dating historical artifacts. Carbon-14 has a half-life of 5730 years. If a bone fragment is found that contains 20% of its original carbon-14, how old is the bone?
To find how old the bone is, we first will need to find an equation for the decay of the carbon-14. We could either use a continuous or annual decay formula, but opt to use the continuous decay formula since it is more common in scientific texts. The half life tells us that after 5730 years, half the original substance remains. Solving for the rate,
\(\frac{1}{2} a=ae^{r5730}\) Dividing by \(a\)
\(\frac{1}{2} =e^{r5730}\) Taking the natural log of both sides
\(\ln \left(\frac{1}{2} \right)=\ln \left(e^{r5730} \right)\) Use the inverse property of logs on the right side
\(\ln \left(\frac{1}{2} \right)=5730r\) Divide by 5730
\(r=\frac{\ln \left(\frac{1}{2} \right)}{5730} \approx -0.000121\)
Now we know the decay will follow the equation \(Q(t)=ae^{-0.000121t}\). To find how old the bone fragment is that contains 20% of the original amount, we solve for \(t\) so that \(Q(t) = 0.20a\).
\(0.20a=ae^{-0.000121t}\)
\(0.20=e^{-0.000121t}\)
\(\ln (0.20)=\ln \left(e^{-0.000121t} \right)\)
\(\ln (0.20)=-0.000121t\)
\(t=\frac{\ln (0.20)}{-0.000121} \approx 13301\) years
The bone fragment is about 13,300 years old.
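The same computation can be checked numerically (an added illustration in R):

```r
# Carbon-14: continuous decay rate from the 5730-year half-life, then the
# age at which 20% of the original carbon-14 remains.
r <- log(0.5) / 5730   # approximately -0.000121 per year
log(0.20) / r          # approximately 13,300 years
```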
In Example 2, we learned that Cesium-137 has a half-life of about 30 years. If you begin with 200 mg of cesium-137, will it take more or less than 230 years until only 1 milligram remains?
Less than 230 years, 229.3157 to be exact.
Doubling Time
For decaying quantities, we asked how long it takes for half the substance to decay. For growing quantities we might ask how long it takes for the quantity to double.
Definition: Doubling Time
The doubling time of a growing quantity is the time it takes for the quantity to double.
Given the basic exponential growth equation \(h(t)=ab^{t}\), doubling time can be found by solving for when the original quantity has doubled; by solving \(2a=a(b)^{x}\), or more simply \(2=b^{x}\). Like with decay, the initial amount is irrelevant when solving for doubling time.
Cancer cells sometimes increase exponentially. If a cancerous growth contained 300 cells last month and 360 cells this month, how long will it take for the number of cancer cells to double?
Defining \(t\) to be time in months, with \(t = 0\) corresponding to this month, we are given two pieces of data: this month, (0, 360), and last month, (-1, 300).
From this data, we can find an equation for the growth. Using the form \(C(t)=ab^{t}\), we know immediately a = 360, giving \(C(t)=360b^{t}\). Substituting in (-1, 300), \(\begin{array}{l} {300=360b^{-1} } \\ {300=\frac{360}{b} } \\ {b=\frac{360}{300} =1.2} \end{array}\)
This gives us the equation \(C(t)=360(1.2)^{t}\)
To find the doubling time, we look for the time when we will have twice the original amount, so when \(C(t) = 2a\).
\(2a=a(1.2)^{t}\)
\(2=(1.2)^{t}\)
\(\log \left(2\right)=\log \left(1.2^{t} \right)\)
\(\log \left(2\right)=t\log \left(1.2\right)\)
\(t=\frac{\log \left(2\right)}{\log \left(1.2\right)} \approx 3.802\) months for the number of cancer cells to double.
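As a quick check of the doubling-time calculation (an added illustration in R):

```r
# Doubling time with monthly growth factor b = 360/300 = 1.2: solve 2 = 1.2^t.
log(2) / log(1.2)   # approximately 3.802 months
```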
Use of a new social networking website has been growing exponentially, with the number of new members doubling every 5 months. If the site currently has 120,000 users and this trend continues, how many users will the site have in 1 year?
We can use the doubling time to find a function that models the number of site users, and then use that equation to answer the question. While we could use an arbitrary a as we have before for the initial amount, in this case, we know the initial amount was 120,000.
If we use a continuous growth equation, it would look like \(N(t)=120e^{rt}\), measured in thousands of users after t months. Based on the doubling time, there would be 240 thousand users after 5 months. This allows us to solve for the continuous growth rate:
\(240=120e^{r5}\)
\(2=e^{r5}\)
\(\ln 2=5r\)
\(r=\frac{\ln 2}{5} \approx 0.1386\)
Now that we have an equation, \(N(t)=120e^{0.1386t}\), we can predict the number of users after 12 months:
\(N(12) =120e^{0.1386(12)} =633.140\) thousand users.
So after 1 year, we would expect the site to have around 633,140 users.
If tuition at a college is increasing by 6.6% each year, how many years will it take for tuition to double?
Solving \(a (1 + 0.066)^t = 2a\), it will take \(t = \dfrac{\log(2)}{\log(1.066)} \approx 10.845\) years, or approximately 11 years, for tuition to double.
Newton's Law of Cooling
When a hot object is left in surrounding air that is at a lower temperature, the object's temperature will decrease exponentially, leveling off towards the surrounding air temperature. This "leveling off" will correspond to a horizontal asymptote in the graph of the temperature function. Unless the room temperature is zero, this will correspond to a vertical shift of the generic exponential decay function.
Definition: Newton's Law of Cooling
The temperature of an object, \(T\), in surrounding air with temperature \(T_{s}\) will behave according to the formula
\(T(t)=ae^{kt} +T_{s}\)
\(t\) is time
\(a\) is a constant determined by the initial temperature of the object
\(k\) is a constant, the continuous rate of cooling of the object
While an equation of the form \(T(t)=ab^{t} +T_{s}\) could be used, the continuous growth form is more common.
A cheesecake is taken out of the oven with an ideal internal temperature of 165 degrees Fahrenheit, and is placed into a 35 degree refrigerator. After 10 minutes, the cheesecake has cooled to 150 degrees. If you must wait until the cheesecake has cooled to 70 degrees before you eat it, how long will you have to wait?
Since the surrounding air temperature in the refrigerator is 35 degrees, the cheesecake's temperature will decay exponentially towards 35, following the equation
\(T(t)=ae^{kt} +35\)
We know the initial temperature was 165, so \(T(0)=165\). Substituting in these values,
\(\begin{array}{l} {165=ae^{k0} +35} \\ {165=a+35} \\ {a=130} \end{array}\)
We were given another pair of data, \(T(10)=150\), which we can use to solve for \(k\)
\(150=130e^{k10} +35\)
\(\begin{array}{l} {115=130e^{k10} } \\ {\frac{115}{130} =e^{10k} } \\ {\ln \left(\frac{115}{130} \right)=10k} \\ {k=\frac{\ln \left(\frac{115}{130} \right)}{10} =-0.0123} \end{array}\)
Together this gives us the equation for cooling: \(T(t)=130e^{-0.0123t} +35\).
Now we can solve for the time it will take for the temperature to cool to 70 degrees.
\(70=130e^{-0.0123t} +35\)
\(35=130e^{-0.0123t}\)
\(\frac{35}{130} =e^{-0.0123t}\)
\(\ln \left(\frac{35}{130} \right)=-0.0123t\)
\(t=\frac{\ln \left(\frac{35}{130} \right)}{-0.0123} \approx 106.68\)
It will take about 107 minutes, or one hour and 47 minutes, for the cheesecake to cool. Of course, if you like your cheesecake served chilled, you'd have to wait a bit longer.
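The cheesecake calculation can be reproduced numerically (an added illustration in R):

```r
# Fit T(t) = a*exp(k*t) + 35 to T(0) = 165 and T(10) = 150, then solve T(t) = 70.
a <- 165 - 35                   # 130
k <- log((150 - 35) / a) / 10   # approximately -0.0123 per minute
log((70 - 35) / a) / k          # approximately 107 minutes
```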
A pitcher of water at 40 degrees Fahrenheit is placed into a 70 degree room. One hour later the temperature has risen to 45 degrees. How long will it take for the temperature to rise to 60 degrees?
\(T(t) = ae^{kt} + 70\). Substituting (0, 40), we find \(a = -30\). Substituting (1, 45), we solve \(45 = -30 e^{k(1)} + 70\) to get \(k = \ln\left(\dfrac{25}{30}\right) = -0.1823\).
Solving \(60 = -30e^{-0.1823t} + 70\) gives
\(t = \dfrac{\ln(1/3)}{-0.1823} = 6.026\) hours.
Logarithmic Scales
Logarithmic Functions:Logarithmic Scales
For quantities that vary greatly in magnitude, a standard scale of measurement is not always effective, and utilizing logarithms can make the values more manageable. For example, if the average distances from the sun to the major bodies in our solar system are listed, you see they vary greatly.
Planet Distance (millions of km)
Mercury 58
Venus 108
Earth 150
Mars 228
Jupiter 779
Saturn 1430
Uranus 2880
Neptune 4500
Placed on a linear scale – one with equally spaced values – these values get bunched up.
[Figure: a linear number line from 0 to 4,500 million km, on which the planet distances bunch together toward the left.]
However, computing the logarithm of each value and plotting these new values on a number line results in a more manageable graph, and makes the relative distances more apparent.(It is interesting to note the large gap between Mars and Jupiter on the log number line. The asteroid belt is located there, which scientists believe is a planet that never formed because of the effects of the gravity of Jupiter.)
Planet Distance (millions of km) log(distance)
Mercury 58 1.76
Venus 108 2.03
Earth 150 2.18
Mars 228 2.36
Jupiter 779 2.89
Saturn 1430 3.16
Uranus 2880 3.46
Neptune 4500 3.65
Sometimes, as shown above, the scale on a logarithmic number line will show the log values, but more commonly the original values are listed as powers of 10, as shown below.
Estimate the value of point \(P\) on the log scale above
The point \(P\) appears to be half way between -2 and -1 in log value, so if \(V\) is the value of this point,
\(\log (V)\approx -1.5\) Rewriting in exponential form,
\(V\approx 10^{-1.5} =0.0316\)
Place the number 6000 on a logarithmic scale.
Since \(\log (6000)\approx 3.8\), this point would belong on the log scale about here:
Plot the data in the table below on a logarithmic scale (From http://www.epd.gov.hk/epd/noise_educ...1/intro_5.html, retrieved Oct 2, 2010).
Source of Sound/Noise Approximate Sound Pressure in \(\mu\) Pa (micro Pascals)
Launching of the Space Shuttle 2,000,000,000
Full Symphony Orchestra 2,000,000
Diesel Freight Train at High Speed at 25 m 200,000
Normal Conversation 20,000
Soft Whispering at 2 m in Library 2,000
Unoccupied Broadcast Studio 200
Softest Sound a human can hear 20
Notice that on the log scale above Example 8, the visual distance on the scale between points \(A\) and \(B\) and between \(C\) and \(D\) is the same. When looking at the values these points correspond to, notice \(B\) is ten times the value of \(A\), and \(D\) is ten times the value of \(C\). A visual linear difference between points corresponds to a relative (ratio) change between the corresponding values.
Logarithms are useful for showing these relative changes. For example, comparing $1,000,000 to $10,000, the first is 100 times larger than the second.
\(\dfrac{1,000,000}{10,000} = 100 = 10^2\)
Likewise, comparing $1000 to $10, the first is 100 times larger than the second.
\(\dfrac{1,000}{10} = 100 = 10^2\)
When one quantity is roughly ten times larger than another, we say it is one order of magnitude larger. In both cases described above, the first number was two orders of magnitude larger than the second.
Notice that the order of magnitude can be found as the common logarithm of the ratio of the quantities. On the log scale above, B is one order of magnitude larger than \(A\), and \(D\) is one order of magnitude larger than \(C\).
Definition: Orders of magnitude
Given two values \(A\) and \(B\), to determine how many orders of magnitude \(A\) is greater than \(B\),
Difference in orders of magnitude = \(\log\left(\dfrac{A}{B}\right)\)
On the log scale above Example 8, how many orders of magnitude larger is \(C\) than \(B\)?
The value \(B\) corresponds to \(10^2 = 100\)
The value \(C\) corresponds to \(10^5 = 100,000\)
The relative change is \(\dfrac{100,000}{100} = 1000 = \dfrac{10^5}{10^2} = 10^3\). The log of this value is 3.
\(C\) is three orders of magnitude greater than \(B\), which can be seen on the log scale by the visual difference between the points on the scale.
Using the table from Try it Now #5, what is the difference of order of magnitude between the softest sound a human can hear and the launching of the space shuttle?
\(\dfrac{2 \times 10^9}{2 \times 10^1} = 10^8\). The sound pressure in \(\mu\)Pa created by launching the space shuttle is 8 orders of magnitude greater than the sound pressure in \(\mu\)Pa created by the softest sound a human ear can hear.
An example of a logarithmic scale is the Moment Magnitude Scale (MMS) used for earthquakes. This scale is commonly and mistakenly called the Richter Scale, which was a very similar scale succeeded by the MMS.
Moment Magnitude Scale
For an earthquake with seismic moment \(S\), a measurement of earth movement, the MMS value, or magnitude of the earthquake, is
\(M = \dfrac{2}{3} log(\dfrac{S}{S_0})\)
Where \(S_0 = 10^{16}\) is a baseline measure for the seismic moment.
If one earthquake has a MMS magnitude of 6.0, and another has a magnitude of 8.0, how much more powerful (in terms of earth movement) is the second earthquake?
Since the first earthquake has magnitude 6.0, we can find the amount of earth movement for that quake, which we'll denote \(S_1\). The value of \(S_0\) is not particularity relevant, so we will not replace it with its value.
\(6.0 = \dfrac{2}{3} log (\dfrac{S_1}{S_0})\)
\(6.0 \cdot \dfrac{3}{2} = \log\left(\dfrac{S_1}{S_0}\right)\)
\(9 = log(\dfrac{S_1}{S_0})\)
\(\dfrac{S_1}{S_0} = 10^9\)
\(S_1 = 10^9 S_0\)
This tells us the first earthquake has about \(10^9\) times more earth movement than the baseline measure.
Doing the same with the second earthquake, \(S_2\), with a magnitude of 8.0,
\(S_2 = 10^{12} S_0\)
Comparing the earth movement of the second earthquake to the first,
\(\dfrac{S_2}{S_1} = \dfrac{10^{12} S_0} {10^9 S_0} = 10^3 = 1000\)
The second value's earth movement is 1000 times as large as the first earthquake.
One earthquake has magnitude of 3.0. If a second earthquake has twice as much earth movement as the first earthquake, find the magnitude of the second quake.
Since the first quake has magnitude 3.0,
\(3.0 = \dfrac{2}{3} log (\dfrac{S}{S_0})\)
Solving for \(S\),
\(3.0 \cdot \dfrac{3}{2} = \log\left(\dfrac{S}{S_0}\right)\)
\(4.5 = log (\dfrac{S}{S_0})\)
\(10^{4.5} = \dfrac{S}{S_0}\)
\(S = 10^{4.5} S_0\)
Since the second earthquake has twice as much earth movement, for the second quake,
\(S = 2 \cdot 10^{4.5} S_0\)
Finding the magnitude,
\(M = \dfrac{2}{3} log (\dfrac{2 \cdot 10^{4.5} S_0}{S_0})\)
\(M = \dfrac{2}{3} log (2 \cdot 10^{4.5}) \approx 3.201\)
The second earthquake with twice as much earth movement will have a magnitude of about 3.2.
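The same calculation, checked numerically (an added illustration in R):

```r
# Magnitude of a quake with twice the earth movement of a magnitude-3.0 quake.
S_ratio <- 10^(3.0 * 3 / 2)    # S / S_0 for the first quake, i.e. 10^4.5
(2 / 3) * log10(2 * S_ratio)   # approximately 3.201
```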
In fact, using log properties, we could show that whenever the earth movement doubles, the magnitude will increase by about 0.201:
\(M = \dfrac{2}{3} log (\dfrac{2S}{S_0}) = \dfrac{2}{3} log (2 \cdot \dfrac{S}{S_0})\)
\(M = \dfrac{2}{3} (log(2) + log(\dfrac{S}{S_0}))\)
\(M = \dfrac{2}{3} log (2) + \dfrac{2}{3} log (\dfrac{S}{S_0})\)
\(M = 0.201 + \dfrac{2}{3} log (\dfrac{S}{S_0})\)
This illustrates the most important feature of a log scale: that multiplying the quantity being considered will add to the scale value, and vice versa.
David Lippman & Melonie Rasmussen
CC BY-SA
|
CommonCrawl
|
Combining serological and contact data to derive target immunity levels for achieving and maintaining measles elimination
Sebastian Funk ORCID: orcid.org/0000-0002-2842-34061,2,
Jennifer K. Knapp3,
Emmaculate Lebo3,
Susan E. Reef3,
Alya J. Dabbagh4,
Katrina Kretsinger4,
Mark Jit1,2,6,7,
W. John Edmunds1,2 &
Peter M. Strebel5
Vaccination has reduced the global incidence of measles to the lowest rates in history. However, local interruption of measles virus transmission requires sustained high levels of population immunity that can be challenging to achieve and maintain. The herd immunity threshold for measles is typically stipulated at 90–95%. This figure does not easily translate into age-specific immunity levels required to interrupt transmission. Previous estimates of such levels were based on speculative contact patterns based on historical data from high-income countries. The aim of this study was to determine age-specific immunity levels that would ensure elimination of measles when taking into account empirically observed contact patterns.
We combined estimated immunity levels from serological data in 17 countries with studies of age-specific mixing patterns to derive contact-adjusted immunity levels. We then compared these to case data from the 10 years following the seroprevalence studies to establish a contact-adjusted immunity threshold for elimination. We lastly combined a range of hypothetical immunity profiles with contact data from a wide range of socioeconomic and demographic settings to determine whether they would be sufficient for elimination.
We found that contact-adjusted immunity levels were able to predict whether countries would experience outbreaks in the decade following the serological studies in about 70% of countries. The corresponding threshold level of contact-adjusted immunity was found to be 93%, corresponding to an average basic reproduction number of approximately 14. Testing different scenarios of immunity with this threshold level using contact studies from around the world, we found that 95% immunity would have to be achieved by the age of five and maintained across older age groups to guarantee elimination. This reflects a greater level of immunity required in 5–9-year-olds than established previously.
The immunity levels we found necessary for measles elimination are higher than previous guidance. The importance of achieving high immunity levels in 5–9-year-olds presents both a challenge and an opportunity. While such high levels can be difficult to achieve, school entry provides an opportunity to ensure sufficient vaccination coverage. Combined with observations of contact patterns, further national and sub-national serological studies could serve to highlight key gaps in immunity that need to be filled in order to achieve national and regional measles elimination.
Measles, a highly contagious immunising infection, could be a future target for eradication [1, 2]. Since the introduction of vaccination in the early 1960s, mortality and morbidity from measles has declined drastically [3]. Nevertheless, outbreaks continue to occur, and achieving regional elimination, or interruption of transmission, has been challenging [4].
Control of measles is achieved through vaccination in early childhood, and the vaccine is part of routine immunisation schedules worldwide. In principle, a functioning health system would aim to vaccinate every child. In practice, 100% coverage with all recommended doses is never achieved. Moreover, not every administration of a vaccine confers immunity, and protection from a vaccine can wane over time. However, even if not everyone in a population is immune, the indirect protection provided by the presence of immune individuals can be sufficient to prevent outbreaks [5]. For measles, it has been shown that in a randomly mixing population, the level of immunity required to achieve this so-called "herd immunity" is in the order of 90–95% [6].
Knowledge of the level of immunity required in a population to achieve herd immunity can be used to set national vaccination targets. However, even if current levels of vaccination are high enough to achieve the level of immunisation required for herd immunity in new birth cohorts, outbreaks can occur if there are immunity gaps in older age groups. To assess the ability of a country or region to achieve and maintain elimination, that is the sustained absence of endemic transmission, immunity levels must therefore be considered across all age groups. These levels are affected by historical and current routine vaccination coverage, but also by vaccination campaigns and past outbreaks that conferred natural immunity.
For this reason, in the late 1990s, the World Health Organization (WHO) European Region (EURO) derived age-specific target immunity profiles, or the levels of immunity necessary in different age groups in order to achieve elimination [7]. These profiles are widely applied within and occasionally outside Europe to assess progress towards elimination [8–16]. Based on a basic reproduction number (or number of secondary cases produced by a typical infective in a totally susceptible population) of 11, it was recommended to ensure that at least 85% of 1–4-year-olds, 90% of 5–9-year-olds and 95% of 10-year-olds and older possess immunity against measles [17]. Unlike vaccination coverage targets, immunity targets reflect the effect of susceptibility in all age groups and highlight the potential need for campaigns to close any gaps in immunity.
The aforementioned target immunity levels derived in the late 1990s were based on assumed age-specific contact patterns matched to the pre-vaccination measles epidemiology in England and Wales. Since then, much work has gone into better quantifying the amount of transmission-relevant contact occurring between different age groups. Diary-based studies have been conducted across Europe [18, 19], as well as in Vietnam [20], China [21], Uganda [22], Zimbabwe [23] and elsewhere. While other methods for measuring social contact patterns exist [24–26], contact data from diary studies have become the de facto standard for studying age-specific infectious disease dynamics. Mathematical models of transmission based on these observed patterns have consistently outperformed those based on homogeneous mixing [27–29].
Here, we aimed to evaluate current guidelines on target immunity levels for measles taking into account contact patterns observed in diary studies. To this end, we combined the observed age-specific social mixing patterns with observed or hypothesised immunity levels to calculate contact-adjusted immunity, akin to the mean level of immunity across the population but taking into account that some age groups have more contact with each other than others. We validated this method by testing the extent to which contact-adjusted immunity levels based on nationwide serological studies conducted in different countries in the late 1990s and early 2000s could have been used to predict the case load in the following decade. We then calculated contact-adjusted immunity levels from a range of hypothetical scenarios of age-specific immunity, including previous recommended immunity levels. We assessed whether these levels would be sufficient for achieving and maintaining elimination.
Predicting elimination from seroprevalence data
We estimated population-level immunity levels from seroprevalence data using the different model variants outlined below and compared them to the number of cases experienced over 10 years using Spearman's rank correlation coefficient.
We further tested different thresholds for these levels to classify countries as being at risk of outbreaks or not. We calculated the misclassification error (MCE) as the proportion of countries that were incorrectly classified based on the given immunity threshold level and a threshold of the number of cases experienced in the 10 years following the seroprevalence study.
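The following minimal sketch (an added illustration with made-up numbers, not the authors' code) shows how such a misclassification error can be computed:

```r
# Countries whose estimated immunity falls below a candidate threshold are
# classified as at risk; this is compared with whether they actually exceeded
# the case threshold. The MCE is the proportion classified incorrectly.
mce <- function(immunity, mean_annual_cases_per_million,
                immunity_threshold, case_threshold) {
  predicted_at_risk <- immunity < immunity_threshold
  observed_at_risk  <- mean_annual_cases_per_million > case_threshold
  mean(predicted_at_risk != observed_at_risk)
}

# hypothetical values for four countries
mce(immunity = c(0.95, 0.91, 0.96, 0.88),
    mean_annual_cases_per_million = c(0.2, 30, 1, 80),
    immunity_threshold = 0.93, case_threshold = 5)   # 0: all classified correctly
```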
Immunity model: contact-adjusted vs. plain
We assumed that the force of infection λi experienced by age group i only depends on the rate of contact with the same and other age groups and the prevalence of infection in the respective age groups:
$$ \lambda_{i} = \sum_{j} \lambda_{ij} = \sum_{j} \beta_{ij} \frac{I_{j}}{N_{j}} $$
where λij is the force of infection exerted by age group j on age group i, βij is the infection rate, or the rate at which individuals in age group i contact individuals out of a total number Nj in age group j and become infected if these are infectious, and Ij is the number of infectious people in age group j. This formulation of the force of infection assumes that the rate of infection between two random individuals depends on their ages only and that the probability of a given contacted member of age group j to be with someone infectious depends on population-level prevalence of infection only.
We further write the infection rate βij as:
$$ \beta_{ij} = p_{\text{Inf}} \phi_{ij} $$
where pInf is the probability that a contact between a susceptible and infectious person leads to infection, here assumed age-independent, and ϕij is the number of contacts an individual of age group j makes with those of age group i per unit time.
The basic reproduction number R0 is defined as the mean number of new cases generated by a single infectious individual in a completely susceptible population. In a system with multiple host types (here: age groups), it can be calculated as the spectral radius (or largest eigenvalue) of the next-generation matrix (NGM) K [30]:
$$ R_{0} = \rho({\mathbf{K}}) $$
The elements of the next-generation matrix K can be written as:
$$ k_{ij} = q \phi_{ij} \frac{N_{i}}{N_{j}} $$
where q is a scale factor that, assuming that infectiousness stays constant while a person is infectious, is the probability of infection upon contact pInf multiplied with the duration of infectiousness DInf. If a proportion ri of age group i is immune, this changes the initially susceptible population from Ni to Ni(1−ri). The reproduction number for an invading infection in such a population is:
$$ R = \rho({\mathbf{K'}}) $$
where, again, ρ denotes the spectral radius and K′ is a matrix with elements:
$$ k'_{ij} = q \phi_{ij} \frac{N_{i}(1-r_{i})}{N_{j}}. $$
In classical mathematical epidemiology in a well-mixed population, the relationship between the basic reproduction number R0 and the effective reproduction number R is:
$$ R=(1-r) R_{0} $$
where r is the proportion of the population that is immune. We interpret:
$$ r' = (1-R/R_{0}) = \left(1-\frac{\rho(\mathbf{K'})}{\rho (\mathbf{K})} \right) $$
as contact-adjusted immunity, that is the equivalent of population immunity once age-specific contact patterns are taken into account. Note that q cancels out, so that calculation of contact-adjusted immunity only requires the contact matrix ϕij, population sizes Ni and immunity levels ri.
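As a concrete illustration of this calculation, the following base R sketch (an added example with made-up age groups, contact rates, population sizes and immunity levels; not the epimixr implementation used in the study) computes contact-adjusted immunity directly from the equations above:

```r
# Contact-adjusted immunity r' = 1 - rho(K') / rho(K). phi[i, j] is the number
# of contacts an individual in age group i makes with age group j per day,
# N the age-group population sizes, r the age-specific proportions immune.
spectral_radius <- function(M) max(abs(eigen(M, only.values = TRUE)$values))

adjusted_immunity <- function(phi, N, r) {
  K      <- phi * outer(N, N, "/")             # k_ij proportional to phi_ij N_i / N_j
  Kprime <- phi * outer(N * (1 - r), N, "/")   # susceptibles N_i (1 - r_i) replace N_i
  1 - spectral_radius(Kprime) / spectral_radius(K)   # the factor q cancels in the ratio
}

phi <- matrix(c(8, 3, 2,
                3, 6, 3,
                2, 3, 4), nrow = 3, byrow = TRUE)    # hypothetical contact matrix
N <- c(1e6, 2e6, 4e6)                                # hypothetical population sizes
r <- c(0.85, 0.90, 0.95)                             # hypothetical immunity levels
adjusted_immunity(phi, N, r)
```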
An assumption of homogeneous mixing is equivalent to assuming that ϕij=δnj, that is the rate of contact of group i being with group j depends only on an overall level of contact δ and the proportion nj=Nj/N of the population that are in group j, \(N=\sum N_{j}\) being the overall population size. This, in turn, means that the infection rate is βij=δpinfnj and the force of infection (Eq. 1) is independent of age group:
$$ \lambda_{i} = \delta p_{\text{inf}}\frac{I}{N} $$
This is equal to the force of infection in a standard SIR model with infection rate β if we set β=δpinf, that is the infection rate is equal to the rate of contact times the probability of infection upon contact between a susceptible and infectious individual.
In that case, the NGM of Eq. (4) reduces to:
$$ k_{ij} = q n_{i} \delta $$
with q=pInfDInf. This matrix has rank 1 (as all of its columns are equal), and its only non-zero eigenvalue is given by the trace:
$$ R_{0} = q \delta = \beta D_{\text{Inf}} $$
If the proportion immune of those in age group i is ri, the elements of K′ are:
$$ k'_{ij} = q (1 - r_{i}) n_{i} \delta $$
$$ R = \beta D_{\text{Inf}} \sum_{i} (1-r_{i}) n_{i} = (1-r) R_{0} $$
where r is the population-weighted average of the ri, that is, the proportion of the whole population that is immune. We call this factor r plain immunity.
R 0 model: fixed vs. scaled
Elimination is equivalent to a situation where R<1 in Eq. 5. For a given basic reproduction number R0, this corresponds to contact adjusted immunity r being greater than a threshold level r∗ in Eq. 8.
The value of the basic reproduction number R0 would be expected to vary between settings, and this could be reflected in different values across countries [31]. Differences in contact patterns (due to factors such as cultural difference, schooling, population density or demography) would be expected to underlie such differences. It is unclear, though, whether these differences are measurable in diary studies, or whether it is masked by inherent uncertainty in these observations, as well as differences in study design and data collection. We therefore tested two interpretations of the contact matrices estimated by diary studies in order to establish this threshold.
Under the first, more conservative interpretation (fixed R0), the contact matrices were taken to capture differences in contact rates between age groups, but not differences between overall levels of contact between the countries. This is equivalent to setting R0 to be equal across countries while still allowing difference in the relative contact rates between age groups. In this case, we would expect a single threshold level of contact-adjusted immunity given by r∗=1−1/R0.
Under the second interpretation (scaled R0), we assumed R0 to scale according to the observed contact patterns in each country. In this case, every country would be expected to have a different threshold of contact-adjusted immunity depending on its value of R0, reflecting the average basic reproduction number within the country. We calculated a scaling factor c for each country such that the basic reproduction number in the country was given by:
$$ R_{0}=c \overline{R_{0}} $$
where \(\overline {R_{0}}\) is the mean basic reproduction number across countries. The factor c can be calculated as the spectral radius of a given contact matrix divided by the mean of the spectral radii across countries. Instead of working with different values of the basic reproduction number R0, we rescaled contacted-adjusted immunity in each country as:
$$ r'=1-(1-r)c $$
With this formulation, we would again expect a single threshold of scaled contact-adjusted immunity given by \({r'}^{*}=1-1/\overline {R_{0}}\).
Vaccination model: projected vs. ignored
Seroprevalence studies only provide a single, cross-sectional snapshot of immunity in a population. Following such a study, vaccination uptake, natural immunity and ageing combine to change the age-specific immunity levels. We compared a model where vaccination was ignored and the measured seroprevalence taken as fixed over the 10-year time period to one where we used an average of projected immunity levels, which were updated using information on vaccination uptake in the years following the seroprevalence study. In principle, updating immunity levels with measured vaccination coverage and wild-type measles circulation should improve estimates of population-level immunity. In practice, this relies on accurate measurements of both vaccination coverage and case numbers as well as modelling decisions on assumed vaccine efficacy, maternal immunity and distribution of multiple doses (e.g. randomly vs. preferentially to children that have already received a dose), which could mask any gains made from having up-to-date immunity estimates.
Here, we focused on added immunity due to vaccination and assumed that the added immunity due to wild-type measles circulation was negligible. Serological samples from under-1-year-olds were only available from 7 of the 17 countries in the ESEN2 study, and the number of samples from each country is too small to produce good estimates (676 samples in total); we combined all these samples to produce an overall estimate of maternal immunity of approximately 40% amongst under-1-year-olds. We assumed that immunity in the age group that contained the scheduled age of the first dose of measles was given by a country-specific scaling factor multiplied with the reported coverage in that year. This factor would reflect the proportion of children in that age group immunised at any point in time, as a fraction of the ones immunised by the time of departure from the age group. The factor was estimated by comparing the observed seroprevalence with the level of coverage reported in that year. For any subsequent doses, we assumed that the vaccine was preferentially given to those that had received a previous dose or doses of the vaccine, as could be estimated from the reported coverage at the time children in that cohort would have been eligible for the previous dose(s). We assumed a vaccine efficacy per dose of 95% [32].
Contact matrices
We established contact matrices from diary studies conducted in a range of different settings using a bootstrap, randomly sampling P individuals with replacement from the P participants of a contact survey. We then determined a weighted average dij of the number of contacts in different age groups j made by participants of each age group i, giving weekday contacts 5/2 times the weight of weekend contacts. We further obtained symmetric matrices, i.e. ones fulfilling cijni=cjinj by rescaling:
$$ c_{ij} = \frac{1}{2}\frac{1}{n_{i}}\left(d_{ij}n_{i} + d_{ji}n_{j} \right) $$
This gave the elements of the contact matrix ϕij=cij/T, scaled by the time period T over which contacts were measured (usually 24 h).
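The symmetrisation step can be sketched as follows (an added illustration with made-up numbers, not the socialmixr code used in the study):

```r
# d[i, j] is the weighted mean number of contacts that participants in age
# group i report with age group j; N holds the age-group population sizes.
# The output satisfies c_ij * N_i = c_ji * N_j (reciprocity of total contacts).
symmetrise_contacts <- function(d, N) {
  n <- N / sum(N)                      # population proportions n_i
  0.5 * (d + t(d) * outer(1 / n, n))   # c_ij = (d_ij n_i + d_ji n_j) / (2 n_i)
}

d <- matrix(c(9.0, 2.0,
              1.5, 5.0), nrow = 2, byrow = TRUE)     # hypothetical diary averages
N <- c(3e6, 5e6)
c_sym <- symmetrise_contacts(d, N)
isTRUE(all.equal(c_sym[1, 2] * N[1], c_sym[2, 1] * N[2]))   # TRUE
```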
We considered the annual number of measles cases reported by each country to WHO. We used serological studies conducted in 17 countries of the WHO EURO as part of the European Sero-Epidemiology Network 2 (ESEN2) project to determine immunity levels at the times of the studies [10]. Equivocal samples were interpreted as positive as in the original study, but we also tested scenarios where they were removed from the sample or interpreted as negative. We took into account uncertainty by drawing from the individual samples using a bootstrap (n=1000) and using the re-sampled immunity levels with re-sampled contact matrices to estimate contact-adjusted immunity. We ensured visually that the number of bootstrap samples chosen produced stable mean estimates of contact-adjusted immunity levels (see Additional file 1: Figure S1). Since contact studies were not available for all countries in ESEN2, contact studies from representative countries were used where necessary (for Mediterranean countries, Italy; for Eastern European countries, Poland; for Sweden, Finland; for Ireland, Great Britain).
We used diary studies available on the Zenodo Social Contact Data Repository (https://zenodo.org/communities/social_contact_data), to determine contact matrices for 17 countries and the Hong Kong Special Administrative Region of China [33–37], a study conducted in Uganda [22] and a further study conducted in five countries of South East Asia.
All computations were done with the R statistical computing language [38]. Contact matrices were calculated using the contact_matrix function in the socialmixr package [39], and contact-adjusted immunity calculated using the adjust_immunity function in the epimixr package [40].
Contact-adjusted immunity levels from serological studies
We first tested the ability of nationwide seroprevalence studies to predict the cases in the decade following, using different definitions of population-level immunity. Overall, the 17 countries that took part in the ESEN2 study in the early 2000s reported 59,494 measles cases to WHO in the 10 years following the study. The number of cases experienced by individual countries varied widely (Fig. 1 and Table 1). Slovakia, where measles was declared eliminated in 1999, only reported a total of 2 cases (both in 2004) in these 10 years. Bulgaria, on the other hand, reported over 20,000 cases, largely as part of a large outbreak in 2009/10.
Maximum number of cases in a year out of the 10 years following the ESEN2 study, in cases per million inhabitants, on a logarithmic scale. Numbers at the top of the bars are the total number of cases reported in the year with most cases. The dotted vertical line indicates the threshold delineation between countries that did (right) or did not (left) experience large outbreaks when testing the ability of population-level immunity metrics to predict either
Table 1 Measles cases in the 10 years following the ESEN2 serological study, and mean estimated population immunity (contact-adjusted or not, with fixed R0 and equivocal samples interpreted as positive) based on the study and adjusted for vaccination uptake
Comparing the immunity levels with the mean number of annual measles cases in the 10-year period yielded the expected negative correlation with most models (Table 2). Contact-adjusted immunity levels estimated based on the serological profiles were better correlated with the case load than plain immunity levels. Further, interpreting equivocal samples as positive yielded the best correlation, but scaling R0 according to measured contacts did not improve correlations compared to using a fixed R0. Projecting national vaccination uptake in the years following the serological surveys onto the observed immunity levels yielded better correlations than just using the snapshots of seroprevalence. For the remaining analyses, we therefore used a fixed R0, interpreted equivocal samples as positive, and corrected immunity levels with vaccination uptake. The resulting immunity levels for the 17 countries in the ESEN2 study are shown in the rightmost two columns of Table 1.
Table 2 Spearman's rank correlation between immunity estimated from nationwide serology and (if contact-adjusted) contact studies on the one hand and the mean number of cases in the 10 years following the studies on the other
The best model had correlation of − 0.53 (Spearman's rank correlation, 90% credible interval (CI) − 0.59–(− 0.46)) for contact-adjusted immunity and − 0.29 (90% CI − 0.36–(− 0.21)) for plain immunity. Notable outliers in the correlation between immunity levels and case load were Latvia (contact-adjusted immunity 71%, plain 82%, 16 cases over 10 years) in one direction (low estimated immunity but no outbreaks), and Spain (contact-adjusted immunity 95%, plain 98%, >3000 cases) and Israel (contact-adjusted immunity 94%, plain 95%, >1500 cases) in the other (high estimated immunity but outbreaks).
To test the predictive ability of estimated seroprevalence levels in combination with age-specific mixing, we split the countries into those that experienced large outbreaks in the 10 years following the serological studies and those that did not. We set the threshold at an average of 5 per million or, equivalently, a maximum annual cases of 20 per million (see dashed line in Fig. 1). We then tested different threshold immunity levels (ranging from 80% to 99%, in increments of 1%) and classified countries as being at risk of outbreaks or not based on whether their estimated immunity levels fell below the threshold or not.
The threshold of contact-adjusted immunity yielding best predictions was 93%, in which case about 70% of countries were correctly classified (Fig. 2). With plain immunity, this level is at 94%, and the corresponding MCE is greater than with contact-adjusted immunity. More generally, the behaviour of the MCE as a function of threshold level was more erratic when considering plain instead of contact-adjusted immunity. In assessing elimination prospects below, we used the threshold value of 93%.
Misclassification error (MCE) as a function of the threshold level of for contact-adjusted or plain immunity. Dots give the mean MCE at the tested threshold levels, connected by a line to guide the eye. The grey shades indicate a standard deviation around the mean (uncertainty coming from both the serological sample and from the contact sample)
We investigated contact-adjusted immunity under previously recommended target immunity levels (85% in under-5-year-olds, 90% in 5–9-year-olds and 95% in all older age groups) in the settings for which we had access to contact studies (17 countries and Hong Kong, Fig. 3a). We used the identified threshold level of 93% as an indicator of being at risk of outbreaks, implying that the mean R0 value we found predictive of outbreaks in Europe was a good estimate elsewhere. In this scenario, 5 out of 18 settings had greater than 10% probability of adjusted immunity levels lower than the 93% level found to best identify countries at risk of outbreaks: Taiwan (probability 95%), The Netherlands (90%), Peru (68%), Uganda (63%) and the UK (40%).
Contact-adjusted immunity in different theoretical scenarios, with age-specific mixing as measured in diary studies. Each column represents one of the scenarios of age-specific immunity (top), with differences between the settings given by their different mixing patterns. Scenarios from left to right: a Current target levels. b 5% higher immunity in under 5-year-olds. c 5% higher immunity in 5–9-year-olds. d 5% lower immunity in 10–14-year-olds. e 5% higher immunity in 5–9-year-olds and 5% lower immunity in 15–19-year-olds
With alternative scenarios, the reproduction numbers changed (Fig. 3 (b–e)). Raising immunity in under-5-year-olds by 5 percentage points to 90% would increase adjusted immunity levels only slightly, with 4 out of the 5 countries (exception: Uganda) at risk under current target immunity levels still at greater than 20% risk. On the other hand, raising immunity in 5-to-9-year-olds by 5 percentage points to 95% would sharply increase contact-adjusted immunity. In this scenario, all countries would have 5% or less probability of being at risk of outbreaks, with 16 out of 18 at less than 1% risk (exceptions: Hong Kong 5%, Netherlands 3%).
In scenarios where immunity in 5-to-9-year-olds was raised but a gap in immunity was introduced in older generations, contact-adjusted immunity dropped below the threshold level of 93% in some settings. A scenario of immunity in 10-to-14-year-olds reduced by 5 percentage points to 90% while retaining higher immunity in younger age groups resulted in elevated risks of outbreaks in 13 out of 18 countries. A scenario of immunity in 15-to-19-year-olds reduced by 5 percentage points to 90% while retaining higher immunity in younger age groups resulted in elevated risks of outbreaks in 11 out of 18 countries.
Taking into account age-specific mixing patterns and applying these to immunity levels observed across Europe, we were better able to predict outbreaks than by considering immunity alone. Combined with previous evidence that using observed age-specific mixing improves the accuracy of mathematical models, this suggests that there is a case for taking these into account when interpreting the results of serological studies [27–29].
A threshold of 93% contact-adjusted immunity was found to best predict outbreaks in the subsequent decade, with approximately 70% of countries correctly assessed to either be facing large outbreaks or not. However, in the absence of any more detailed information on setting-specific basic reproduction numbers, such a threshold will only ever be an approximation. On the other hand, setting-specific parameters are difficult to establish, are subject to method-specific biases and can span a wide range of values [31, 41]. In principle, country-specific reproduction numbers should depend on the frequency and types of contact within the population and should therefore be amenable to measurement in contact studies such as the ones used here. Yet, scaling estimated susceptibility levels with the relative number of contacts reported in each study gave no improved results over the simpler version not using such scaling. In other words, while there probably are differences in R0 between countries, these do not appear to be identifiable in contact studies. At the same time, the contact studies do have value in giving different weights to different age groups when calculating contact-adjusted immunity depending on their contact patterns. We therefore argue that while the achieved 70% of accuracy in predicting outbreaks is far from perfect, aiming to achieve 93% or greater contact-adjusted immunity in a population is a pragmatic choice that can be informed by measurable quantities, that is age-specific immunity levels and mixing patterns.
Current guidelines on target immunity levels are based on estimates derived almost 20 years ago, and were based on assumed mixing patterns matched to pre-vaccination data from England and Wales. We have used transmission models in combination with recently observed age-specific contact patterns from a variety of European and some non-European settings to assess whether these guidelines are sufficient for achieving measles elimination. We investigated a range of settings with different demographic profiles and cultural contexts: from high-income settings characterised by low birth rates and an ageing population (e.g., Germany or the UK) to having more (Vietnam) or less (Taiwan) recently undergone the demographic transition to low birth rates, or characterised by a high birth rate and young population (Uganda). With observed mixing patterns, several settings were found to be at risk of outbreaks even if they achieved previously recommended target immunity levels, including ones with very different demographic profiles. Achieving 95% immunity in 5-to-9-year-olds, on the other hand, would reduce transmission sufficiently to achieve elimination in all except the most extreme scenarios.
The importance of immunity levels in 5-to-9-year-olds presents both a challenge and an opportunity: Levels as high as 95% in this age group can only be maintained through high levels of two-dose immunisation prior to school entry. At the same time, entering this age group coincides with school entry, which involves a level of organisation that provides the opportunity to both check the immunisation status of children and offer additional vaccinations if necessary. The experience of the Pan-American Health Organization in eliminating measles supports these findings. A key component to interrupting measles virus transmission were periodic 'follow-up' vaccination campaigns of pre-school children, timed at 4-year intervals to ensure high immunisation by the time of school entry [42, 43]. Studies in the USA, where measles was eliminated in 2000, suggest that different minimum vaccine coverage levels were required to prevent measles virus transmission among different age groups [44]. School-aged populations accounted for the majority of measles cases between 1976 and 1988, and compulsory vaccination as part of school attendance laws played an important role in reducing measles incidence on the path to elimination [45]. Where there were less stringent vaccination requirements at school entry, more cases of measles were observed [46]. Analyses of pre-elimination measles outbreaks in the USA indicated that transmission occurred among highly vaccinated school-aged populations, suggesting that higher population immunity levels were needed among school-aged children compared to preschool-aged children [47]. It has been proposed that minimum coverage levels as low as 80% at the second birthday of children may be sufficient to prevent transmission among preschool-aged children in the USA if population immunity is at least 93% among over-5-year-olds [48].
While our results stress the role of 5-to-9-year-olds, they also highlight the importance of not having gaps in immunity in older age groups. This is particularly important close to elimination as a lower force of infection pushes cases into older age groups [49]. Given the higher rate of complications of measles when experienced at older age, ensuring immunity among adults will be important not only for interrupting transmission, but also to prevent serious episodes of disease [50].
Our study has several limitations. The delineation of countries into having experienced outbreaks or not is somewhat arbitrary, if in agreement with a milestone towards measles eradication established by the World Health Assembly [51]. Furthermore, the applied threshold of 93% was found to best distinguish between countries that experienced outbreaks and those that did not, but similar performance would have been achieved with thresholds of 92% and, to a slightly lesser extent, 94%. These values correspond roughly to the commonly used range of 12–18 for the basic reproduction number. Applying these thresholds would have had strong consequences for the assessment of elimination prospects for different immunity profiles, leading to higher (for a threshold of 94%) or lower (for a threshold of 92%) age-specific immunity targets. Depending on the local situation with respect to measles elimination, a country may therefore decide to apply less or more stringent immunity thresholds. Moreover, population immunity represents past levels of vaccine coverage or natural infection which may not be reflective of the future. For example, immunity may be high just after a major outbreak but such outbreaks could occur again if coverage is sub-optimal. In addition, population migration can change immunity levels in a way that is not captured by vaccination coverage figures. An important caveat is therefore that seeing immunity sufficient to interrupt transmission does not guarantee that elimination is maintained if current levels of coverage are insufficient.
We assumed that immunity levels and contact patterns alone are sufficient to predict the expected case load. In reality, numerous co-factors such as sub-national heterogeneity or contact patterns that are not captured in age-specific contact matrices (e.g. household and schooling structures) could have influenced this relationship. In fact, the contact-adjusted immunity levels we estimated from serological studies did not always correctly predict where outbreaks could be expected. On the one hand, Latvia did not experience large numbers of cases in spite of low levels of contact-adjusted immunity. It was among the smallest in our group of countries for which we had serological data available and may have been at lower risk of imported cases. Still, they would have been expected to have seen more cases given the results of the serological studies in 2003 and 2004, respectively. Immunity levels were as low as 76% among all age groups and 62% in 5- to 9-year-olds in 2003, but only 16 cases of measles were reported in the 10 years 2004–2013. Even with the high rates of vaccination coverage (95% coverage of both first and second dose) over these 10 years, outbreaks would have been expected within the age cohorts with large amounts of susceptibility. To our knowledge, there were no supplementary immunisation activities that could explain the absence of outbreaks. It would be of value to determine whether the country is now at high risk of large outbreaks in spite of having previously interrupted transmission, or whether there were issues with the serological tests conducted at the time.
Israel and Spain, on the other hand, experienced large numbers in spite of high levels of contact-adjusted immunity. Three potential causes for this discrepancy suggest themselves: First, in Spain, the samples were collected in 1996 when there was an ongoing large measles outbreak and may therefore not reflect population-level immunity in the years following. Second, drops in vaccination coverage as well as vaccination campaigns may have changed the risk of outbreaks during the 10 years following the serological studies, although we found nothing in the publicly available national-level vaccination data to suggest any significant changes. Both Spain and Israel consistently reported 94% first-dose MMR coverage in the years following the seroprevalence studies. Third, serology based on residual and population-based samples may not always be representative of relevant immunity levels. In Spain, a disproportionate number of cases occurred in young adults [11], but there was nothing in the serological data to suggest that this might be expected. Moreover, if those lacking immunity are preferentially in contact with each other because they cluster socially or geographically, outbreaks could occur in these groups; population-level serology might not provide a good estimate of realised immunity levels in outbreak settings. In Israel, outbreaks occurred in orthodox religious communities with very low vaccination coverage [52]. More generally, herd immunity thresholds have been shown to increase if non-vaccination is clustered [53].
These examples highlight that taking into account heterogeneity is crucial. Our method can be applied to lower levels than countries, such as municipalities or counties. Further sub-national serological and epidemiological studies, particularly in low-income countries at high risk of measles outbreaks, could generate key insights on the relationship between immunity levels, heterogeneity of susceptibility and outbreak risk [54, 55]. At the same time, further studies of contact patterns across settings, combined with models of such patterns where no data have been collected, will make it possible to expand our results to other countries and regions [56].
We have shown that combining national measles seroprevalence studies with data on contact patterns increases their utility in predicting the expected case load and in assessing how close a country is to eliminating measles. Comparing past seroprevalence levels to the case load in the following decade enabled us to establish a threshold level of 93% contact-adjusted immunity that appeared sufficient to ensure elimination. Translating this into target age-specific immunity levels that would be necessary to achieve this level of immunity, we found that greater immunity in 5–9-year-olds is needed than was previously recommended. While such high levels can be difficult to achieve, school entry provides an opportunity to ensure sufficient vaccination coverage. Combined with observations of contact patterns, further national and sub-national serological studies could serve to highlight key gaps in immunity that need to be filled in order to achieve national and regional measles elimination.
Contact data are available from the Zenodo Social Contact Data Repository (https://zenodo.org/communities/social_contact_data), except the data for Uganda which is available from the original publication (le Polain de Waroux et al., 2018), and the data for 5 South East Asian countries (Cambodia, Indonesia, Taiwan, Thailand, Vietnam) which are based on the Social Mixing for Influenza Like Illness (SMILI) project. For access to these data, researchers should contact John Edmunds ([email protected]) or Jonathan Read ([email protected]).
National-level measles case data were downloaded from the World Health Organization (https://www.who.int/immunization/monitoring_surveillance/data/en/).
Seroprevalence data are based on the European Sero-Epidemiology Network 2 (ESEN2) project. Aggregate data have been published previously. For access to the individual-level data, researchers should contact Richard Pebody ([email protected]).
All the results in this paper can be reproduced using code at https://github.com/sbfnk/immunity.thresholds.
ESEN2: European Sero-Epidemiology Network 2
MCE: Misclassification error
World Health Organization. Global measles and rubella strategic plan 2012–2020. Geneva: WHO Press; 2012.
Roberts L. Is measles next? Science. 2015; 348(6238):958–63. https://doi.org/10.1126/science.348.6238.958.
Strebel P, Cochi SL, Hoekstra E, Rota PA, Featherstone D, Bellini W, Katz SL. A world without measles. J Infect Dis. 2011; 204 Suppl 1:1–3. https://doi.org/10.1093/infdis/jir111.
Kupferschmidt K. Public health. Europe's embarrassing problem. Science. 2012; 336(6080):406–7. https://doi.org/10.1126/science.336.6080.406.
Fine PE, Eames K, Heymann DL. "Herd immunity": a rough guide. Clin Infect Dis. 2011; 52(7):911–6.
Nokes D, Anderson R. The use of mathematical models in the epidemiological study of infectious diseases and in the design of mass immunization programmes. Epidemiol Infect. 1988; 101(1):1–20. https://doi.org/10.1017/S0950268800029186.
Ramsay M. A strategic framework for the elimination of measles in the European Region. Copenhagen: Regional Office for Europe; 1999.
Borčić B, Mažuran R, Kaić B. Immunity to measles in the Croatian population. Eur J Epidemiol. 2003; 18(11):1079–83. https://doi.org/10.1023/a:1026109201399.
Pistol A, Hennessey K, Pitigoi D, Ion-Nedelcu N, Lupulescu E, Walls L, Bellini W, Strebel P. Progress toward measles elimination in romania after a mass vaccination campaign and implementation of enhanced measles surveillance. Vaccine. 2003; 187(Supplement 1):217–22. https://doi.org/10.1086/368228.
Andrews N, Tischer A, Siedler A, Pebody RG, Barbara C, Cotter S, Duks A, Gacheva N, Bohumir K, Johansen K, Mossong J, Ory F. d., Prosenc K, Sláčiková M, Theeten H, Zarvou M, Pistol A, Bartha K, Cohen D, Backhouse J, Griskevicius A. Towards elimination: measles susceptibility in australia and 17 european countries. Bull World Health Organ. 2008; 86:197–204.
Peña-Rey I, Martínez de Aragón V, Mosquera M, de Ory F, Echevarrıa JE, Measles Elimination Plan Working Group in Spain. Measles risk groups in Spain: implications for the European measles-elimination target. Vaccine. 2009; 27:3927–34. https://doi.org/10.1016/j.vaccine.2009.04.024.
Theeten H, Hutse V, Hens N, Yavuz Y, Hoppenbrouwers K, Beutels P, Vranckx R, van Damme P. Are we hitting immunity targets? the 2006 age-specific seroprevalence of measles, mumps, rubella, diphtheria and tetanus in belgium. Epidemiol Infect. 2010; 139(4):494–504. https://doi.org/10.1017/s0950268810001536.
Poethko-Müller C, Mankertz A. Sero-epidemiology of measles-specific IgG antibodies and predictive factors for low or missing titres in a German population-based cross-sectional study in children and adolescents (KiGGS). Vaccine. 2011; 29(45):7949–59. https://doi.org/10.1016/j.vaccine.2011.08.081.
Mollema L, Smits G, Berbers G, Van Der Klis F, Van Binnendijk R, De Melker H, Hahné S. High risk of a large measles outbreak despite 30 years of measles vaccination in the Netherlands. Epidemiol Infect. 2014; 142(5):1100–8.
Keenan A, Ghebrehewet S, Vivancos R, Seddon D, MacPherson P, Hungerford D. Measles outbreaks in the UK, is it when and where, rather than if? a database cohort study of childhood population susceptibility in liverpool, UK. BMJ Open. 2017; 7(3):014106. https://doi.org/10.1136/bmjopen-2016-014106.
Tomášková H, Zelená H, Kloudová A, Tomášek I. Serological survey of measles immunity in the Czech Republic, 2013. Cent Eur J Public Health. 2018; 26(1):22–7. https://doi.org/10.21101/cejph.a5251.
Gay NJ. The theory of measles elimination: implications for the design of elimination strategies. J Infect Dis. 2004; 189 Suppl 1:27–35. https://doi.org/10.1086/381592.
Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Tomba GS, Wallinga J, Heijne J, Sadkowska-Todys M, Rosinska M, Edmunds W. Social contacts and mixing patterns relevant to the spread of infectious diseases. PLoS Med. 2008; 5(3):74. https://doi.org/10.1371/journal.pmed.0050074.
Danon L, Read JM, House TA, Vernon MC, Keeling MJ. Social encounter networks: characterizing Great Britain. Proc R Soc B. 2013; 280(1765):20131037.
Horby P, Thai PQ, Hens N, Yen NTT, Thoang DD, Linh NM, Huong NT, Alexander N, Edmunds W, Duong TN, et al. Social contact patterns in Vietnam and implications for the control of infectious diseases. PLoS ONE. 2011; 6(2):16965.
Read JM, Lessler J, Riley S, Wang S, Tan LJ, Kwok KO, Guan Y, Jiang CQ, Cummings DA. Social mixing patterns in rural and urban areas of southern China. Proc R Soc Lond B Biol Sci. 2014; 281(1785):20140268.
le Polain de Waroux O, Cohuet S, Ndazima D, Kucharski AJ, Juan-Giner A, Flasche S, Tumwesigye E, Arinaitwe R, Mwanga-Amumpaire J, Boum Y, Nackers F, Checchi F, Grais RF, Edmunds W. Characteristics of human encounters and social mixing patterns relevant to infectious diseases spread by close contact: a survey in Southwest Uganda. BMC Infect Dis. 2018; 18(1):172. https://doi.org/10.1186/s12879-018-3073-1.
Melegaro A, Del Fava E, Poletti P, Merler S, Nyamukapa C, Williams J, Gregson S, Manfredi P. Social contact structures and time use patterns in the Manicaland Province of Zimbabwe. PLoS ONE. 2017; 12(1):0170459.
Read JM, Edmunds W, Riley S, Lessler J, Cummings DAT. Close encounters of the infectious kind: methods to measure social mixing behaviour. Epidemiol Infect. 2012; 140(12):2117–30. https://doi.org/10.1017/S0950268812000842.
Smieszek T, Barclay VC, Seeni I, Rainey JJ, Gao H, Uzicanin A, Salathé M. How should social mixing be measured: comparing web-based survey and sensor-based methods. BMC Infect Dis. 2014; 14:136. https://doi.org/10.1186/1471-2334-14-136.
Smieszek T, Castell S, Barrat A, Cattuto C, White PJ, Krause G. Contact diaries versus wearable proximity sensors in measuring contact patterns at a conference: method comparison and participants' attitudes. BMC Infect Dis. 2016; 16(1):341.
Wallinga J, Teunis P, Kretzschmar M. Using data on social contacts to estimate age-specific transmission parameters for respiratory-spread infectious agents. Am J Epidemiol. 2006; 164(10):936–44. https://doi.org/10.1093/aje/kwj317.
Meyer S, Held L. Incorporating social contact data in spatio-temporal models for infectious disease spread. Biostatistics. 2016. https://doi.org/10.1093/biostatistics/kxw051. http://arxiv.org/abs/1512.01065v2.
Santermans E, Goeyvaerts N, Melegaro A, Edmunds W, Faes C, Aerts M, Beutels P, Hens N. The social contact hypothesis under the assumption of endemic equilibrium: elucidating the transmission potential of vzv in europe. Epidemics. 2015; 11:14–23. https://doi.org/10.1016/j.epidem.2014.12.005.
Diekmann O, Heesterbeek JAP, Roberts MG. The construction of next-generation matrices for compartmental epidemic models. J R Soc Interface. 2010; 7(47):873–85. https://doi.org/10.1098/rsif.2009.0386.
Guerra FM, Bolotin S, Lim G, Heffernan J, Deeks SL, Li Y, Crowcroft NS. The basic reproduction number (R0) of measles: a systematic review. Lancet Infect Dis. 2017. https://doi.org/10.1016/S1473-3099(17)30307-9.
Centers for Disease Control and Prevention. CDC Health Information for International Travel 2014: The Yellow Book. Oxford: Oxford University Press; 2014.
Leung K, Jit M, Lau EHY, Wu JT. Social contact data for Hong Kong. Version. 2018. https://doi.org/10.5281/zenodo.1165562.
Melegaro A, Fava ED, Poletti P, Merler S, Nyamukapa C, Williams J, Gregson S, Manfredi P. Zimbabwe social contact data. Version. 2017. https://doi.org/10.5281/zenodo.1127694.
Grijalva CG, Goeyvaerts N, Verastegui H, Edwards KM, Gil AI, Lanata CF, Hens N. Peruvian social contact data. Version 1.0. 2017. https://doi.org/10.5281/zenodo.1215891.
Mossong J, Hens N, Jit M, Beutels P, Auranen K, Mikolajczyk R, Massari M, Salmaso S, Tomba GS, Wallinga J, Heijne J, Sadkowska-Todys M, Rosinska M, Edmunds W. POLYMOD social contact data. Version 1.1. 2017. https://doi.org/10.5281/zenodo.1215899.
Béraud G, Kazmercziak S, Beutels P, Levy-Bruhl D, Lenne X, Mielcarek N, Yazdanpanah Y, Boëlle P-Y, Hens N, Dervaux B. France social contact data. Version. 2018. https://doi.org/10.5281/zenodo.1158452.
R Core Team. R: a language and environment for statistical computing. Vienna: R Foundation for Statistical Computing; 2018. https://www.R-project.org/.
Funk S. Socialmixr: social mixing matrices for infectious disease modelling. 2018. R package version 0.0.1. https://cran.r-project.org/package=socialmixr.
Funk S. Epimixr: epidemiological analysis using social mixing matrices. 2018. R package version 0.0.1. https://github.com/sbfnk/epimixr.
Li J, Blakeley D, Smith RJ. The failure of R0. Comput Math Methods Med. 2011; 2011:1–17. https://doi.org/10.1155/2011/527610.
de Quadros C, Hersh B, Nogueira A, Carrasco P, da Silveira C. Measles eradication: experience in the americas. MMWR Morb Mortal Wkly Rep. 1999; 48(SU01):57–64.
Andrus JK, de Quadros CA, Solórzano CC, Periago MR, Henderson D. Measles and rubella eradication in the americas. Vaccine. 2011; 29 Suppl 4:91–6. https://doi.org/10.1016/j.vaccine.2011.04.059.
Orenstein WA, Papania MJ, Wharton ME. Measles elimination in the United States. J Infect Dis. 2004; 189(Suppl 1):1–3.
Centers for Disease Control and Prevention. School immunization requirements for measles – United States, 1982. MMWR Morb Mortal Wkly Rep. 1982; 31:65–7.
Salmon DA, Teret SP, MacIntyre CR, Salisbury D, Burgess MA, Halsey NA. Compulsory vaccination and conscientious or philosophical exemptions: past, present, and future. Lancet. 2006; 367(9508):436–42. https://doi.org/10.1016/S0140-6736(06)68144-0.
Markowitz LE, Preblud SR, Orenstein WA, Rovira EZ, Adams NC, Hawkins CE, Hinman AR. Patterns of transmission in measles outbreaks in the United States, 1985–1986. N Engl J Med. 1989; 320(2):75–81. https://doi.org/10.1056/NEJM198901123200202.
Hutchins SS, Baughman AL, Orr M, Haley C, Hadler S. Vaccination levels associated with lack of measles transmission among preschool-aged populations in the United States, 1989–1991. J Infect Dis. 2004; 189(Supplement_1):108–15.
Anderson RM, May RM. Age-related changes in the rate of disease transmission: implications for the design of vaccination programmes. J Hyg (Lond). 1985; 94(3):365–436.
Orenstein WA, Perry RT, Halsey NA. The clinical significance of measles: a review. J Infect Dis. 2004; 189(Supplement 1):4–16.
World Health Organization. Measles: key facts. 2018. http://www.who.int/news-room/fact-sheets/detail/measles. Archived at http://www.webcitation.org/713YeLhwt. Accessed 20 July 2018.
Anis E, Grotto I, Moerman L, Warshavsky B, Slater PE, Lev B, Israeli A. Measles in a highly vaccinated society: the 2007-08 outbreak in Israel. J Infect. 2009; 59:252–8. https://doi.org/10.1016/j.jinf.2009.07.005.
Truelove SA, Graham M, Moss WJ, Metcalf CJE, Ferrari MJ, Lessler J. Characterizing the impact of spatial clustering of susceptibility for measles elimination. Vaccine. 2019; 37(5):732–41. https://doi.org/10.1016/j.vaccine.2018.12.012.
Metcalf CJE, Farrar J, Cutts FT, Basta NE, Graham AL, Lessler J, Ferguson NM, Burke DS, Grenfell BT. Use of serological surveys to generate key insights into the changing global landscape of infectious disease. Lancet. 2016. https://doi.org/10.1016/S0140-6736(16)30164-7.
Trentini F, Poletti P, Merler S, Melegaro A. Measles immunity gaps and the progress towards elimination: a multi-country modelling analysis. Lancet Infect Dis. 2017; 17(10):1089–97. https://doi.org/10.1016/s1473-3099(17)30421-8.
Prem K, Cook AR, Jit M. Projecting social contact matrices in 152 countries using contact surveys and demographic data. PLoS Comput Biol. 2017; 13(9):1005697. https://doi.org/10.1371/journal.pcbi.1005697.
We would like to thank the ESEN2 group for sharing serological data, and the SMILI group for contact data. We further acknowledge fruitful discussions with members of the World Health Organization Strategic Advisory Group of Experts on measles and rubella.
SF was supported by a Career Development Award in Biostatistics from the UK Medical Research Council (MR/K021680/1) and a Wellcome Trust Senior Research Fellowship in Basic Biomedical Science (210758/Z/18/Z).
Centre for the Mathematical Modelling of Infectious Diseases, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK
Sebastian Funk, Mark Jit & W. John Edmunds
Department of Infectious Disease Epidemiology, London School of Hygiene & Tropical Medicine, Keppel Street, London, UK
Centers for Disease Control and Prevention, 1600 Clifton Rd, Atlanta, GA, USA
Jennifer K. Knapp, Emmaculate Lebo & Susan E. Reef
World Health Organization, Avenue Appia 20, Geneva, Switzerland
Alya J. Dabbagh & Katrina Kretsinger
GAVI Alliance, Chemin du Pommier 40, Le Grand-Saconnex, Switzerland
Peter M. Strebel
Modelling and Economics Unit, National Infections Service, Public Health England, 61 Colindale Avenue, London, UK
Mark Jit
School of Public Health, University of Hong Kong, 7 Sassoon Road, Hong Kong SAR, China
Sebastian Funk
Jennifer K. Knapp
Emmaculate Lebo
Susan E. Reef
Alya J. Dabbagh
Katrina Kretsinger
W. John Edmunds
SF conducted the analyses and wrote the first draft. All authors contributed to subsequent and final drafts. All authors read and approved the final manuscript.
Correspondence to Sebastian Funk.
Institutional ethics approval was not sought because this is a retrospective study, and the databases are anonymised and free of personally identifiable information.
Dr Strebel is currently employed by the US Centers for Disease Control and Prevention and seconded to Gavi, the Vaccine Alliance.
Additional file 1
Supplementary Figure 1. Mean estimate of contact-adjusted immunity as a function of the number of bootstrap samples. Each line represents one country. (PDF 69 kb)
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License(http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The Creative Commons Public Domain Dedication waiver(http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated.
Funk, S., Knapp, J.K., Lebo, E. et al. Combining serological and contact data to derive target immunity levels for achieving and maintaining measles elimination. BMC Med 17, 180 (2019). https://doi.org/10.1186/s12916-019-1413-7
Social mixing
What is the symmetry of $\mathbb{Z}[\sqrt{1 + i}]$?
If we make a plot of the units and primes from among the Gaussian integers, $\mathbb{Z}[i]$, we see a fourfold symmetry. For example, $2 + i$ and $1 + 2i$. I'm tempted to say there is also eightfold symmetry, but I'm not completely sure, mostly because there are four units, not eight.
The field $\mathbb{Q}(\sqrt{1 + i})$ is of degree $4$ but it has only one intermediate field, $\mathbb{Q}(i)$, according to LMFDB. This suggests $\mathbb{Z}[\sqrt{1 + i}]$ has at least fourfold symmetry.
The fundamental unit is $\sqrt{1 + i} - i - (\sqrt{1 + i})^3$. I've found it very awkward and error-prone to do arithmetic with this number. My calculations suggest that the powers of this unit escape to infinity, like in a real quadratic ring, but my calculations could very easily be wrong. And even if my calculations are correct, I could have misunderstood them.
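A quick numerical version of that check (a rough Python sketch using the principal complex square root; it only tracks absolute values in one embedding and by itself says nothing about whether the element really is a unit):

```python
# Numerical check of the powers of u = sqrt(1+i) - i - sqrt(1+i)^3
# in one complex embedding (principal square root); floating point only.
alpha = (1 + 1j) ** 0.5
u = alpha - 1j - alpha ** 3

for n in range(12):
    print(n, abs(u ** n))
```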
If we plot the units and primes in the ring of integers of $\mathbb{Q}(\sqrt{1 + i})$, will we find it has at least fourfold symmetry like $\mathbb{Q}(i)$?
algebraic-number-theory
Bob Happ
I think I know what you mean by awkward, error-prone arithmetic. But that's why we've got computers. For example, even in Wolfram Alpha you can do Table[N[(Sqrt[1 + I] - I - Sqrt[1 + I]^3)^n], {n, 0, 19}] and you will readily see that it indeed gets farther and farther away from 0. – Mr. Brooks Sep 12 '19 at 22:50
You have to be careful with something like this. It's a quadratic extension of a quadratic extension, where the situation can vary. There easily could be just two symmetries, for instance.
It seems clear that you haven't seen any Galois Theory. You have to check whether your extension is normal. For this you find the minimal polynomial of $\alpha=\sqrt{1+i}$, and see whether all roots are expressible as polynomial expressions (or rational, but that shouldn't be necessary here) in $\alpha$. Once you do this, you should be able to find the symmetries easily.
Lubin
Analysis of attenuation coefficient estimation in Fourier-domain OCT of semi-infinite media
Babak Ghafaryasl, Koenraad A. Vermeer, Jeroen Kalkman, Tom Callewaert, Johannes F. de Boer, and Lucas J. Van Vliet
Babak Ghafaryasl,1,2,* Koenraad A. Vermeer,1 Jeroen Kalkman,2 Tom Callewaert,2 Johannes F. de Boer,3 and Lucas J. Van Vliet2
1Rotterdam Ophthalmic Institute, Rotterdam Eye Hospital, Rotterdam, 3011 BH, The Netherlands
2Department of Imaging Physics, Delft University of Technology, Delft, 2628 BL, The Netherlands
3Department of Physics and Astronomy, Vrije Universiteit Amsterdam, 1081 HV, The Netherlands
*Corresponding author: [email protected]
Koenraad A. Vermeer https://orcid.org/0000-0002-4038-3945
Jeroen Kalkman https://orcid.org/0000-0003-1698-7842
https://doi.org/10.1364/BOE.403283
Babak Ghafaryasl, Koenraad A. Vermeer, Jeroen Kalkman, Tom Callewaert, Johannes F. de Boer, and Lucas J. Van Vliet, "Analysis of attenuation coefficient estimation in Fourier-domain OCT of semi-infinite media," Biomed. Opt. Express 11, 6093-6107 (2020)
Attenuation coefficient
Multiple scattering
Tissue optical properties
Revised Manuscript: September 26, 2020
Manuscript Accepted: September 28, 2020
The attenuation coefficient (AC) is an optical property of tissue that can be estimated from optical coherence tomography (OCT) data. In this paper, we aim to estimate the AC accurately by compensating for the shape of the focused beam. For this, we propose a method to estimate the axial PSF model parameters and AC by fitting a model for an OCT signal in a homogenous sample to the recorded OCT signal. In addition, we employ numerical analysis to obtain the theoretical optimal precision of the estimated parameters for different experimental setups. Finally, the method is applied to OCT B-scans obtained from homogeneous samples. The numerical and experimental results show accurate estimations of the AC and the focus location when the focus is located inside the sample.
© 2020 Optical Society of America under the terms of the OSA Open Access Publishing Agreement
Optical coherence tomography (OCT) has been widely used to capture structural information of tissues in clinical tasks such as the diagnosis of retinal and vascular diseases. Extracting the valuable information embedded in the signal intensity of the constituting tissues has also been investigated. An optical property such as the attenuation coefficient (AC) offers valuable information that can be estimated from the intensity of the OCT signal. It has the potential to act as a biomarker for the diagnosis and monitoring of chorioretinal diseases [1], breast tumor lesions [2], renal tumors [3,4], oral cancer [5], rectal cancer [6] and several other applications such as atherosclerotic plaque characterization [7–9]. Several methods based on single [10,11,12] and multiple [13,14] scattering of light have been presented for estimating the attenuation coefficient in a homogeneous medium using OCT. Recently, a depth-resolved single-scattering based method was developed by Vermeer et al. [15] for estimating attenuation coefficients in inhomogeneous media, e.g. in tissues. For all methods that estimate the attenuation coefficient, the OCT signal must be corrected for: 1) the depth-dependent noise floor [16]; 2) the so-called roll-off, i.e. the depth-dependent signal decay caused by discrete signal detection and resolution limitations of the detection process [15,17]; and 3) the axial point spread function (PSF), which, for a Gaussian-shaped beam, is governed by the effective Rayleigh length around the focus position of the beam [15,18]. Compensation for noise and roll-off is nowadays a standard procedure, which can be done with a function obtained from a fit to reference data [17]. However, in order to correct for the axial PSF, in many cases its model parameters need to be estimated from the acquired data, since the effective Rayleigh length and focus depend on the optical system, e.g. on the cornea and lens of the eye. We showed in previous work how sensitive the attenuation coefficient is to an error in the estimated parameters of the axial PSF model [19]. Therefore, accurate and precise estimation of these parameters is required to achieve an unbiased and precise estimation of the attenuation coefficient. Various methods have been developed to estimate the attenuation coefficient of tissue while taking into account the effect of the beam shape on the acquired OCT signal. Smith et al. [20] compensate for the effect of focus using an existing model of the shape of the beam; in their work, however, the parameters of the beam shape need to be known in advance. In many medical applications, such as ophthalmology, the location of the focal point varies, so a method is needed that automatically estimates the focus location in order to compensate for the effect of the beam shape when estimating the attenuation coefficient. Stefan et al. [21] introduced a method that uses two B-scans to first estimate the location of the focus and then estimate the attenuation coefficient from a single-scattering model of the OCT light after compensating for the effect of the beam shape. This method relies on having identical A-lines in the two scans in order to eliminate the effect of the attenuation coefficients. However, it was only tested on static samples, where scanning the identical physical location in both B-scans is feasible and factors such as the beam's angle of incidence can be controlled to ensure a similar tissue attenuation coefficient.
Another limitation of this method is the necessity of having access to two scans from the same position in the tissue. However, for much clinical data, such as retinal scans, only one averaged measurement of the same tissue location is available.
In this paper, we aim to achieve an accurate estimate of attenuation coefficient by compensating for all of the aforementioned effects on the recorded OCT signal. To do so, we propose a method to estimate the axial PSF model parameters (focus depth and Rayleigh length in the medium) and attenuation coefficient by fitting a single scattering based model for a homogenous sample OCT signal to the recorded OCT signal after subtraction of the depth-dependent noise floor and compensating for roll-off. In addition, a Cramér-Rao analysis is performed to theoretically determine the attainable precision of the estimated parameters and to investigate the limitations of the proposed procedure for various experimental configurations. Monte Carlo simulations of the estimation method are performed to evaluate the robustness of the method and compare the precision of the theoretical lower bound produced by the Cramér-Rao analysis and to show a possible bias in the estimated parameters. Finally, the method is applied to B-scans obtained with an experimental OCT system from homogeneous samples with various concentrations of TiO2 particles dispersed in silicone to assess the precision and accuracy of the method.
In this section, we introduce a method for accurate estimation of the attenuation coefficient in a homogenous (or single layer) sample by compensating the recorded OCT signal for the noise floor, roll-off, and axial PSF.
2.1 Estimating the model parameters
In a single scattering model of light presented by Faber et al. [22], the Fourier-domain OCT signal at physical depth z in a homogeneous sample may be expressed by,
(1)$$S(z) = R(z)\frac{1}{{{{\left( {\frac{{z - {z_0}}}{{2{z_R}}}} \right)}^2} + 1}}C{e^{ - 2\mu z}} + N(z) + \varepsilon (z), $$
where the first term models the signal decay caused by three factors (from left to right): the roll-off (expressed by R(z)), the axial PSF modeled by a Cauchy function at focus position z0 and scaled by the Rayleigh length zR [18], and the signal attenuation modeled by the attenuation coefficient μ and scaling factor C. The second term, N(z), is the depth-dependent noise floor and can be obtained by averaging over a large number of A-lines without a sample in the sample arm of the OCT system. The intensity of the OCT signal has an exponential distribution caused by speckle noise. However, due to the central limit theorem, by averaging over a sufficiently large number of neighboring A-lines (>30 as a rule of thumb) with exponential distributions, the averaged OCT signal at depth z tends toward a normal distribution $\mathcal{N}[m(z), m(z)^2/N]$, with m(z) being the expected value of the exponential distribution, and $m(z)^2/N$ the variance of the resulting normal distribution. The third term ɛ(z) represents this speckle noise. In addition, the roll-off can be measured and the signal can be corrected for roll-off by performing the operation $A(z) = (S(z) - N(z))/R(z)$ [15].
We estimate the model parameters of the axial PSF and the attenuation coefficient using a maximum likelihood estimator. For this, Eq. (1) was fitted to the measurements. For an averaged A-line $A(z )$ with ${N_D}\; $ data-points as a function of z, the independent parameters $C,\; $µ, z0 and ${z_R}$ can be estimated by minimizing the sum of squared residuals, $\chi ,$ given by,
(2)$$\chi = \sum\limits_{j = 1}^{{N_D}} {{{\left[ {A({z_j}) - C\frac{{{e^{ - 2\mu {z_j}}}}}{{{{\left( {\frac{{{z_j} - {z_0}}}{{2{z_R}}}} \right)}^2} + 1}}} \right]}^2}}, $$
where subscript j is an index that denotes the data-point number on each averaged A-line.
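As an illustration of this fit, the sketch below minimizes Eq. (2) for a single averaged A-line using SciPy's curve_fit rather than the authors' own implementation; the synthetic A-line, the noise level (500-fold averaging) and the initial values are assumptions chosen to mirror the simulations reported later:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(z, C, mu, z0, zR):
    """Noise- and roll-off-corrected OCT signal model (first term of Eq. (1))."""
    return C * np.exp(-2.0 * mu * z) / (((z - z0) / (2.0 * zR)) ** 2 + 1.0)

# Synthetic averaged A-line (illustrative values, depths in mm).
z = np.arange(788) * 1.27e-3
true_params = (2.5e4, 0.72, 0.16, 0.042)            # C (arb.), mu (1/mm), z0 (mm), zR (mm)
rng = np.random.default_rng(0)
clean = model(z, *true_params)
A = clean + rng.normal(scale=clean / np.sqrt(500))  # std ~ m(z)/sqrt(N) after averaging N = 500 A-lines

# Least-squares fit of Eq. (2); p0 holds the initial guesses for C, mu, z0, zR.
p0 = (2.0e4, 1.0, 0.1, 0.05)
popt, pcov = curve_fit(model, z, A, p0=p0, maxfev=20000)
print("estimated C, mu, z0, zR:", popt)
```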
2.2 Model selection and evaluation
To design a reliable model-based method for estimating the axial PSF from recorded data, we studied the influence of integrating prior information into the model, such as a known or joint model parameter among multiple averaged A-lines, to reduce the degrees of freedom and aiming to thereby improve the estimation precision of the remaining parameters. Moreover, the attainable precision of the estimated parameters $\{{{\theta_1}, \ldots ,{\theta_N}} \}= \{ C,\mu ,{z_0},{z_R}\}$ needs to be calculated for various experimental setups. Exploring the precision of the estimated parameters such as the focus depth into the sample, the Rayleigh length, illumination intensity and the attenuation coefficient of the medium, enables us to optimize the experimental design and to know the limitations of the proposed method. For these purposes, a Cramér-Rao analysis was applied using a derivation of the Fisher information matrix for a Gaussian noise model (see equations (9)-11 from Caan et al. [23]). Cramer-Rao analysis is limited to finding the minimal variance of the model parameters assuming an unbiased estimator. To evaluate the optimal precision of the estimated parameters and to compare different models and experimental setups, we use the relative errors, as provided by the diagonal elements of the relative Cramér-Rao lower bound (rCRLB) matrix [23]. The diagonal elements are the relative theoretical lower bounds on the variance of the unbiased estimators of each parameter. We intuitively considered an estimation error lower than 10% to be acceptable for the purpose of this paper.
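One way to evaluate the rCRLB numerically under the Gaussian noise model is sketched below (an illustration only, not the authors' implementation; the finite-difference Jacobian, the depth axis and the parameter values are assumptions):

```python
import numpy as np

def model(z, C, mu, z0, zR):
    # Mean of the noise- and roll-off-corrected OCT signal (first term of Eq. (1)).
    return C * np.exp(-2.0 * mu * z) / (((z - z0) / (2.0 * zR)) ** 2 + 1.0)

def fisher_information(theta, z, sigma):
    """Fisher information for independent Gaussian noise with known std sigma(z)."""
    m0 = model(z, *theta)
    J = np.empty((z.size, len(theta)))
    for k in range(len(theta)):                      # finite-difference Jacobian of the mean signal
        step = 1e-6 * max(abs(theta[k]), 1.0)
        tp = np.array(theta, dtype=float)
        tp[k] += step
        J[:, k] = (model(z, *tp) - m0) / step
    return J.T @ (J / sigma[:, None] ** 2)

z = np.arange(788) * 1.27e-3                         # depth axis in mm (illustrative)
theta = np.array([2.5e4, 0.72, 0.16, 0.042])         # C, mu (1/mm), z0 (mm), zR (mm)
sigma = model(z, *theta) / np.sqrt(500)              # std of an averaged (N = 500) A-line
F = fisher_information(theta, z, sigma)
rcrlb = np.sqrt(np.diag(np.linalg.inv(F))) / np.abs(theta)   # relative CRLB per parameter
print("relative CRLB for C, mu, z0, zR:", rcrlb)
```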
Multiple averaged A-lines can be used to estimate the model parameters. For this, let $A(z) = \{ {A_1}(z),{A_2}(z),\ldots ,{A_{{N_A}}}(z)\}$ be a set of ${N_A}$ averaged A-lines with ${N_D}\; $ data-points on each A-line. For a matrix of ${N_A} \times {N_D}$ averaged A-lines, any unknown parameter among C, µ, z0, zR$\; $ can be estimated by minimizing the sum of squared residuals, $\chi ,$ given by,
(3)$$\chi = \sum\limits_{i = 1}^{{N_A}} {\sum\limits_{j = 1}^{{N_D}} {{{\left[ {{A_i}({z_j}) - C\frac{{{e^{ - 2\mu {z_j}}}}}{{{{\left( {\frac{{{z_j} - {z_0}}}{{2{z_R}}}} \right)}^2} + 1}}} \right]}^2}} }$$
where the parameters can be considered to be common (joint), fixed (known) or independent among the averaged A-lines. Table 1 lists seven models with different degrees of freedom by defining some of the parameters fixed (known), or by defining them as a common (joint) parameter to be estimated among different averaged A-lines. Such an evaluation assists us in choosing the model with the highest estimation precision while considering the feasibility of its implementation under experimental conditions. In Cramér-Rao analysis due to interdependency between the averaged A-lines, we assume that having a fixed or common parameter is equivalent since compared to the variance of the estimated parameters for every averaged A-line, the amount of joint information is large. Therefore, we expect the error we estimate for fixed parameters to be a good approximation of the error that we obtain in the estimation of common (joint) parameters. In the next step, we used Monte Carlo simulations to show a possible bias and investigate if the simulations achieve the precisions given by the Cramér-Rao bound. This was performed by generating a large set of simulated OCT signals using Eq. (1) and estimating the model parameter with the proposed method for different parameter values.
Table 1. Overview of different OCT signal models with different degrees of freedom (DOF). The unknown independent parameters among the averaged A-lines which need to be estimated are indicated by "Indep." and the fixed (known) or common (joint) parameters are shown by "Fix/Com." in the table.
In this section, we first present the statistical analysis and numerical simulations that study the performance of the different models in Table 1 and of the estimation method, using Cramér-Rao analysis and Monte Carlo simulation, respectively. This provides insight into the available information embedded in the data for different experimental setups and models with different degrees of freedom. Finally, we present the experimental results on a homogeneous phantom to assess the real-life performance of the proposed method.
3.1 Model selection by Cramér-Rao analysis
A Cramér-Rao analysis was performed to assess the amount of information present in the data and the impact thereof on the attainable precision for all model parameters. Equation (1) was used to simulate OCT depth profiles. A simulated (thick) homogeneous sample with a refractive index of 1.44, a physical thickness of 1 mm and an attenuation coefficient of 0.72 mm$^{-1}$ was located at the zero-delay line. The model parameters in Eq. (1) were set to ${z_R}$ = 42 µm, C = 2.5$\times 10^{4}$ and ${z_0}$ = 160 µm inside the sample. Each A-line consisted of 788 pixels and the physical axial pixel size $\Delta{z}$ of the system in air was set to 1.27 µm. For a realistic simulation, the OCT signal was distorted by exponential noise with the intensity-dependent mean at each depth, equal to the expected values of S(z) in Eq. (1). To reduce the noise, we averaged over single A-lines as explained in section 2.1. An example of a simulated single A-line is presented in Fig. 1(a), together with an averaged (over 500 simulated A-lines) OCT signal. The noise of the averaged A-line resembles an intensity-dependent Gaussian distribution due to the central limit theorem. We obtained the intensity-dependent standard deviations for all averaged A-lines using 1000 observations of the simulated averaged A-lines. In Fig. 1(b), we show the rCRLB values after averaging N single A-lines. The diagonal elements of the rCRLB matrix represent the optimal precision of the estimated model parameters for the aforementioned intensity-dependent standard deviations. As is shown, by averaging over 500 A-lines, the estimation error of µ remains below 10%. Averaging of 500 A-lines is therefore used for the simulated and measured OCT signals in the following sections. The calculated rCRLB matrix for the simulated signals is shown in Fig. 2. As seen in this figure, the model parameter estimation errors remain below 5%.
Fig. 1. (a) The simulated single (blue) and averaged (red) OCT signals distorted by intensity-dependent Gaussian noise as a function of depth in physical distance. The averaged OCT signal was obtained by averaging over 500 single A-lines. (b) rCRLB values after averaging 1 to 1000 A-lines. The model parameters were set to ${z_0}$ = 160 µm, µ = 0.72 mm$^{-1}$, C = 2.5$\times 10^{4}$, and ${z_R}$ = 42 µm.
Fig. 2. rCRLB matrices for models with intensity-dependent Gaussian noise. The model parameters were set to ${z_0}$ = 160 µm, µ = 0.72 mm$^{-1}$, C = 2.5$\times 10^{4}$, and ${z_R}$ = 42 µm.
In addition, we investigate the precision of the estimated parameters for different degrees of freedom imposed on the model. The diagonal elements of the rCRLB matrix for the seven models shown in Table 1 are depicted in Fig. 3. As it can be observed, incorporating prior knowledge by fixing the parameter values on different combinations of ${z_0}$, C, and ${z_R}$ results in a better precision of parameter µ for depth-variant noise as indicated by the smaller values for Model 1…Model 6 compared to Model 7 as defined in Table 1.
Fig. 3. The diagonal elements of the rCRLB matrices for the seven models (M1…M7) as shown in Table 1.
3.2 Experiment design by Cramér-Rao analysis
To assess the attainable precision under different experimental conditions, we calculated the rCRLB matrices as a function of one of the model parameters while keeping the other ones fixed. The rCRLB values are shown in Fig. 4 for a range of parameter values. In this figure, the horizontal dashed lines indicate the acceptable error (below 10%) and the vertical dashed lines indicate the set parameter values in the simulation and also were considered to be fixed for the other plots in this figure. In Fig. 4(a), it can be observed that when the focus location is above the sample, the estimation error for parameters ${z_o}$ and µ is exceeding 10%. In addition, when the focus location is less than 0.08 mm inside the sample, the estimation error for µ is within 10% to 40% and it remains below 10% for the deeper focus locations. The estimation errors for C and ${z_R}$ remain below 10% when the focus is located inside the sample. Figure 4(b) shows that for a Rayleigh length below 300 µm the estimation errors of µ remains below 10%. The estimation error of ${z_0}$ increases to above 10% for Rayleigh lengths larger than 250 µm. The estimation errors for C and zR remain below 10% for Rayleigh lengths below 500 µm. By varying the attenuation of the sample, Fig. 4(c) shows that the estimation errors of µ as well as ${z_0}\; $ remain below 10% for the attenuation coefficient values between 0.25 mm−1 and 8 mm−1. Figure 4(d) shows that by increasing the light intensity the precision of all estimated parameters remains the same due to the intensity-dependent noise.
Fig. 4. The error (%) of the estimated model parameters obtained from the diagonal elements of the rCRLB matrix for: a) ${z_R}$ = 42 µm, µ = 0.72 mm−1, C = 2.5${\times} {10^4}$ and ${z_0}$ = [−0.2,0.9] mm ; b) ${z_0}$ = 160 µm inside the sample, µ = 0.72 mm−1, C = 2.5${\times} {10^4}$ and ${z_R}$ = [0.01,0.5] mm; c) ${z_R}$ = 42 µm, ${z_0}$ = 160 µm inside the sample, C = 2.5${\times} {10^4}$ and µ= [0.01,8] mm−1 ; d) ${z_R}$ = 42 µm, ${z_0}$ = 160 µm, µ = 0.72 mm−1 inside the sample and C = [${10^2}$,${10^6}$] (arb. units). The vertical dashed lines indicate the parameter values, which were set in the simulations and also were considered to be fixed for the other plots in this figure.
3.3 Estimation accuracy and precision: Monte Carlo simulation
To investigate if the theoretical lower bounds on the precision estimated by CRLB can be attained by our estimation method, Monte Carlo simulations were performed. In the simulated data, the location of the focus was varied between the surface and 0.6 mm inside the sample; the other model parameters were set to ${z_R}$ = 42 µm, C = 2.5$\times 10^{4}$ and µ = 0.72 mm−1. Next, we simulated 500 averaged A-lines distorted by Gaussian noise with an intensity-dependent standard deviation, as explained in section 3.1, for different focus locations. The method in section 2.1 was applied to estimate the model parameters using the fmincon function of MATLAB [Curve Fitting Toolbox, MATLAB 2013; The MathWorks, Natick, MA] using interior-point optimization with a termination tolerance set to ${10^{ - 15}}$, and the maximum number of iterations and function evaluations set to ${10^5}$.
Prior knowledge of the sample under investigation in combination with known properties of the optical system is useful to set suitable initial parameter values. The initial value of C (2.5$\times 10^{4}$ (arb. unit)) was set by choosing an arbitrary A-line and taking the average of the intensity values at all depths within the sample. To investigate the effect of the initial parameter values on the estimation results, the initial parameter values were varied individually, over the following ranges: 0.01 mm ≤ ${z_R}$ ≤ 0.2 mm, 0 mm ≤ ${z_0}$ ≤ 2 mm, $10^{3}$ ≤ C ≤ 7$\times 10^{4}$ and 0 mm−1 ≤ µ ≤ 6 mm−1, while the other parameters were set to the aforementioned initial parameter values. Figure 5 shows the coefficient of variation (CoV) and bias of the estimated parameters for different settings of the initial values. As can be seen, the CoVs remain below 10% and the bias error below 1%.
Fig. 5. The coefficient of variation (CoV) (left column) and bias of the estimated parameters (right column) using the proposed method obtained from 100 simulated OCT signals, when initial parameter values are set to: a-b) 0 mm−1 ≤ µ ≤ 6 mm−1, ${z_R}$ = 50 µm, C = 2$\times 10^{4}$ (arb. unit), and ${z_0}$ = 1 mm; c-d) 0 mm ≤ ${z_0}$ ≤ 2 mm, µ = 1 mm−1, ${z_R}$ = 50 µm, and C = 2$\times 10^{4}$ (arb. unit); e-f) 0.01 mm ≤ ${z_R}$ ≤ 0.2 mm, µ = 1 mm−1, C = 2$\times 10^{4}$ (arb. unit), and ${z_0}$ = 1 mm; and g-h) $10^{3}$ ≤ C ≤ 7$\times 10^{4}$ (arb. unit), µ = 1 mm−1, ${z_R}$ = 50 µm, and ${z_0}$ = 1 mm. The simulated model parameters were set to ${z_R}$ = 42 µm, C = 2.5$\times 10^{4}$ (arb. unit), µ = 0.72 mm−1 and ${z_0}$ = 0.16 mm.
The coefficient of variation (CoV) of the estimated parameters in the Monte Carlo simulation, the rCRLB values for different parameters and the estimation bias as a function of focus location, are shown in Fig. 6. For varying ${z_0}$, the initial values for the unknown parameters were set to ${z_R}$ = 50 µm, C = 2$\times 10^{4}$ (arb. unit), µ = 1 mm−1 and the values of ${z_0}$ were set to 0.2 mm above the expected focus locations for each averaged A-line. We can observe in Fig. 6(a) that the estimation error of the parameters by Monte Carlo simulation is below 12% when the focus location is inside the sample. In addition, Fig. 6(b) shows an acceptable bias error in the Monte Carlo simulation.
Fig. 6. Monte Carlo simulation results: a) the coefficient of variation (CoV) of the estimated parameters (solid lines) for 500 measurements and Cramér-Rao lower bounds (dashed lines); and (b) bias of the estimated parameters as a function of focus location inside the medium.
To summarize the results, it has been shown that knowing more model parameters results in a better estimation precision of µ (Fig. 3). Additionally, the proposed approach ideally can estimate the model parameters with an acceptable precision (below 10%) when the number of averaged A-lines is larger than 500, the location of the focus is inside the sample, the Rayleigh length is below 0.4 mm and the attenuation coefficient of the sample is more than 0.2 mm−1 and less than 6 mm−1 (Fig. 4). Therefore, these limitations for Rayleigh length and the attenuation coefficient were considered in designing our experimental setup. In addition, we obtained acceptable results using interior-point solver. This routine was used for estimating the model parameters in the real measurements explained in the next section.
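For readers who want to reproduce the flavour of these simulations, a compact Monte Carlo sketch is given below; SciPy's curve_fit stands in for MATLAB's fmincon, and the parameter values, noise model and number of repetitions are illustrative assumptions rather than the exact settings used above:

```python
import numpy as np
from scipy.optimize import curve_fit

def model(z, C, mu, z0, zR):
    return C * np.exp(-2.0 * mu * z) / (((z - z0) / (2.0 * zR)) ** 2 + 1.0)

z = np.arange(788) * 1.27e-3
true = np.array([2.5e4, 0.72, 0.16, 0.042])          # C, mu (1/mm), z0 (mm), zR (mm)
rng = np.random.default_rng(1)

estimates = []
for _ in range(200):                                 # repeated noisy realisations
    clean = model(z, *true)
    noisy = clean + rng.normal(scale=clean / np.sqrt(500))
    popt, _ = curve_fit(model, z, noisy, p0=(2.0e4, 1.0, 0.1, 0.05), maxfev=20000)
    estimates.append(popt)
estimates = np.array(estimates)

cov = estimates.std(axis=0) / np.abs(estimates.mean(axis=0))   # coefficient of variation
bias = (estimates.mean(axis=0) - true) / true                  # relative bias
print("CoV  (C, mu, z0, zR):", cov)
print("bias (C, mu, z0, zR):", bias)
```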
3.4 Experimental setup
In this section, we apply our method to measurements obtained from different samples with an experimental OCT system and estimate the model parameters based on either single or multiple B-scans. The B-scans of three thick, semi-infinite samples with 0.05 wt. %, 0.1 wt. % and 0.25 wt. % of TiO2 in silicone, with the zero-delay location positioned 0.4 mm above the sample surface, were recorded with various locations of the focal plane relative to the samples' surfaces using a Ganymede-II-HR Thorlabs spectral domain OCT system (GAN905HV2-BU) [24]. The system has a centre wavelength of 900 nm, a bandwidth of 195 nm, and a Thorlabs scan lens (LSM02-BB) with 18 mm focal length. The system's axial and lateral resolutions were 3 µm (in air) and 4 µm, respectively, and the axial and lateral physical pixel sizes in air were 1.27 × 2.9 µm with 1024 pixels on each A-line.
First, the focus position was manually set to the sample's surface by optimizing the sharpness of the surface structure in the centre of the en-face image created by the OCT camera. Next, 90 B-scans were obtained at various locations of the focal plane by changing the axial location of the lens in the sample arm with a physical step size of 11.25 µm within a range of ±0.5 mm around the initial focus location. Figure 7(a) shows a B-scan of 0.05 wt. % TiO2 in silicone with the focus adjusted to 0.26 mm from the focus location on the surface, as estimated by optimizing the sharpness. Figure 7(b) shows the concatenation of the averaged A-lines (one from each B-scan) as a function of focus position. The location of the sample remained fixed in the B-scans for the various focus positions by adjusting the optical path length of the system's reference arm. As can be seen in Fig. 7(b), the highest intensity on the surface deviates from the centre of the image, which indicates a shift in the aforementioned adjustment of the focus position on the surface.
Fig. 7. a) A B-scan of 0.05 wt. % TiO2 in silicone with the focus adjusted to 0.26 mm from the adjusted focus on the surface; b) the averaged A-lines (from the acquired B-scans per focus position) as a function of focus position; c) averaged OCT signals (circles) along data-points located at 63 µm inside the sample (dashed lines in (a)) with the best-fitted focus model.
Several pre-processing steps were performed on the measured spectra. First, the reference arm intensity, which was measured automatically by the system for every acquisition, was subtracted. Second, to compensate for the nonlinear spectral spacing, the spectra were interpolated based on the wavelength distribution on the detector as reported by the system manufacturer. Third, the OCT signals were generated by taking the squared magnitude of the Fourier transform of the interpolated spectra. As mentioned in section 2.1, the averaged noise floor was obtained by averaging over a large number of A-lines recorded with no sample in the sample arm, and was subtracted from the A-lines. Afterwards, the roll-off of the system was measured and the A-lines were corrected for it. To estimate the roll-off, scans of 0.25 wt. % TiO2 for different locations of the surface within the OCT image depth range were obtained by changing the optical path length of the reference arm, and the sensitivity decay was obtained by smoothing and interpolating the average intensities of the corresponding region of interest in the scans. Finally, regions above and inside the sample that contained only noise were removed from each B-scan [7].
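A minimal sketch of this pre-processing chain is given below. The array names, the wavelength grid and the placeholder noise-floor and roll-off curves are illustrative assumptions; they do not reproduce the actual Thorlabs system output or the measured calibration data.

```python
import numpy as np

n_px = 2048
pix = np.arange(n_px)

# Placeholder inputs: one recorded spectrum, the reference-arm spectrum and a
# (slightly nonlinear) wavelength-to-pixel mapping standing in for the one
# reported by the manufacturer.
raw_spectrum = 10.0 + np.random.rand(n_px)
reference = np.full(n_px, 10.0)
wavelengths = 800e-9 + 200e-9 * (pix / n_px) + 5e-9 * (pix / n_px) ** 2

# 1) remove the reference-arm intensity
fringe = raw_spectrum - reference

# 2) resample to a uniform wavenumber grid (k = 2*pi/lambda) to undo the nonlinear spacing
k = 2.0 * np.pi / wavelengths
k_uniform = np.linspace(k.min(), k.max(), n_px)
fringe_lin = np.interp(k_uniform, k[::-1], fringe[::-1])

# 3) OCT intensity: squared magnitude of the Fourier transform (positive depths only)
a_line = np.abs(np.fft.fft(fringe_lin))[: n_px // 2] ** 2

# 4) subtract the averaged noise floor (measured with an empty sample arm) ...
noise_floor = np.zeros(n_px // 2)        # placeholder
a_line = a_line - noise_floor

# 5) ... and divide out the measured sensitivity roll-off
roll_off = np.ones(n_px // 2)            # placeholder for the smoothed/interpolated decay
a_line = a_line / roll_off
```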
The system's Rayleigh length was estimated by the following procedure. For each B-scan obtained from the sample with 0.05 wt. % of TiO2 in silicone, the averaged A-line was calculated and the data-point at a physical depth of 63 µm below the sample's surface was recorded. This physical depth should be close enough to the surface to obtain a sufficiently high SNR, and far enough from the surface to avoid the data-points being affected by the reflection artefact. The initial values of z0 and zR were obtained by fitting the following model to the recorded data-points (Fig. 7(c)),
(4) $$f(z; z_0, z_R) = \frac{C}{\left( \frac{z - z_0}{2 z_R} \right)^2 + 1}$$
where zR is the Rayleigh length, which depends on the refractive index n of the medium [18]. In addition, the shifted focus positions were transformed from physical to optical distance. The optical Rayleigh lengths in air and silicone (with a refractive index of n_sample = 1.44) were estimated to be 29.3 µm and 60.8 µm (zR,Si = n_Si² · zR,air), respectively, as shown in Fig. 7(c). The physical Rayleigh length in silicone was calculated to be 42 µm.
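This calibration step can be sketched as follows. The intensity-versus-focus data are synthetic placeholders for the measurements of Fig. 7(c); the fitted model is Eq. (4) and the n² scaling to silicone follows the relation quoted above.

```python
import numpy as np
from scipy.optimize import curve_fit

def confocal(z, C, z0, zR):
    # Eq. (4): intensity at a fixed depth as a function of focus position z.
    return C / (((z - z0) / (2.0 * zR)) ** 2 + 1.0)

n_si = 1.44
focus_positions = np.linspace(-0.5, 0.5, 90)     # optical focus shift (mm), illustrative

# Synthetic stand-in for the recorded intensities at 63 um depth (cf. Fig. 7(c)).
rng = np.random.default_rng(2)
data = confocal(focus_positions, 2.0e4, 0.05, 0.0293) \
       + rng.normal(0.0, 200.0, focus_positions.size)

(C_fit, z0_fit, zR_air), _ = curve_fit(confocal, focus_positions, data, p0=[1e4, 0.0, 0.05])

zR_si_optical = n_si ** 2 * zR_air     # ~60.8 um in the paper (optical)
zR_si_physical = zR_si_optical / n_si  # ~42 um in the paper (physical)
print(zR_air, zR_si_optical, zR_si_physical)
```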
3.5 Estimating the model parameters in a single B-Scan
The unconstrained model of Eq. (1) was fitted to the averaged A-lines obtained from 500 single A-lines of each B-scan. The initial values were set to µ = 3 mm⁻¹, C = 5×10³ and zR = 56 µm, and the initial values of z0 were set to 0.2 mm above the expected focus location for each averaged A-line, i.e. z0 = (expected focus location − 0.2 mm). The results of the fit to the measurements of 8 B-scans acquired with focus positions of {−0.5, −0.3, −0.1, 0.15, 0.3, 0.45, 0.6, 0.75} mm are shown in Fig. 8. The estimated parameter values for all recorded B-scans are shown in Fig. 9. Since the parameters zR and µ are constant among the B-scans, we expect them to have similar estimated values. However, as can be seen, when the focus is above the sample, the estimated parameters are far from the expected values. Additionally, when the focus location is within a depth of 0.1 mm inside the sample, the estimated attenuation coefficients seem to differ significantly from the values estimated at larger depths. This also confirms the simulation results in section 3.1.
Fig. 8. The result of fitting the constrained model (blue) to the averaged A-lines per B-scan (red) obtained from the sample with 0.05 wt. % TiO2 in silicone for eight different focus positions {−0.5, −0.3, −0.1, 0.15, 0.3, 0.45, 0.6, 0.75} mm from left to right. The location of the estimated focus (within the shown depth range) is indicated by the vertical dashed line.
Fig. 9. The estimated model parameters as a function of focus position, obtained from the averaged A-lines per B-scan acquired from the sample with 0.05 wt. % TiO2 in silicone. The vertical red dashed lines indicate the B-scan in which the focus was on the sample's surface.
For focus locations inside the sample, the estimated focus locations are in better agreement with the expected focus locations when the focus is close to the surface than when it is very deep inside the sample.
3.6 Estimating the model parameters using multiple B-Scans
Multiple B-scans of the same sample acquired with different focus positions can also be combined to estimate the model parameters, as explained in section 2.2. In this case, the model parameters µ and zR were considered common, while the focus position z0 and C were left free to vary among the B-scans. Only the B-scans in which the focus was inside the sample were considered in this experiment. The initial parameter values of the fitting process were the same as the values mentioned in section 3.3. To investigate whether the estimation result depends on the number of B-scans used, different numbers of B-scans (2, 4, 8, 16 and 32) acquired from the sample with 0.05 wt. % TiO2 in silicone were used. The estimated parameters µ, zR and z0 are shown in Fig. 10 for different numbers of combined B-scans. As can be seen, the estimated attenuation coefficient and Rayleigh length do not change significantly when more than 8 B-scans are used.
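A minimal sketch of such a joint fit, with µ and zR shared and C and z0 varying per B-scan, is given below. The averaged A-lines are simulated stand-ins and SciPy's least_squares replaces the MATLAB routine used in the paper; focus depths and noise levels are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def model(z, C, mu, z0, zR):
    # Single-scattering signal with confocal factor (cf. Eq. (1), without roll-off/noise).
    return C * np.exp(-2.0 * mu * z) / (((z - z0) / (2.0 * zR)) ** 2 + 1.0)

rng = np.random.default_rng(3)
z = np.linspace(0.0, 1.0, 512)                            # depth (mm)
focus_depths = [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8]   # per-B-scan focus (illustrative)

# Simulated averaged A-lines standing in for the measured ones (shared mu and zR).
alines = [model(z, 2.5e4, 0.72, z0, 0.042) * (1.0 + 0.02 * rng.standard_normal(z.size))
          for z0 in focus_depths]
n_scans = len(alines)

def unpack(p):
    mu, zR = p[0], p[1]
    C = p[2:2 + n_scans]
    z0 = p[2 + n_scans:]
    return mu, zR, C, z0

def residuals(p):
    # Eq. (3): stacked residuals over all averaged A-lines, with mu and zR shared.
    mu, zR, C, z0 = unpack(p)
    return np.concatenate([model(z, C[i], mu, z0[i], zR) - alines[i] for i in range(n_scans)])

p0 = np.concatenate(([1.0, 0.05],                    # initial mu (mm^-1) and zR (mm)
                     np.full(n_scans, 2.0e4),        # per-scan C
                     np.array(focus_depths) - 0.2))  # per-scan z0, 0.2 mm above expected focus
fit = least_squares(residuals, p0)
mu_hat, zR_hat, C_hat, z0_hat = unpack(fit.x)
print(mu_hat, zR_hat)
```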
Fig. 10. Estimated model parameters: (a) zR and (b) µ, obtained from the averaged A-lines of 2, 4, 8 and 16 B-Scans acquired at different focus positions inside the sample with 0.05 wt. % TiO2 in silicone. The estimated and expected z0 are shown for the combinations of (c) 2, (d) 4, (e) 8, and (f) 16 B-Scans, i.e. focus locations.
We combined 8 B-scans to estimate the attenuation coefficient µ for samples with different concentrations of TiO2 in silicone (0.05 wt. %, 0.1 wt. % and 0.25 wt. %). The results are shown in Table 2. The standard errors for the estimated attenuation coefficients, Rayleigh lengths and focus locations were 0.01 mm⁻¹, 0.0001 mm, and less than 0.01 mm, respectively. In theory, the relationship between the particle concentration and the attenuation coefficient should be linear. The estimation results show a reasonable correlation between the TiO2 weight concentration and the estimated attenuation coefficient (R² = 0.99, calculated over all B-scans).
Table 2. Estimated attenuation coefficients obtained using 8 B-scans acquired with different focus positions inside the sample for three phantoms with different TiO2 weight concentrations in silicone.

TiO2 conc. (wt. %)     0.05    0.1    0.25
Estimated µ (mm⁻¹)     1.0     2.1    4.4
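As a quick illustration (not part of the paper's analysis), the linear trend and an indicative R² can be computed directly from the three Table 2 values; the paper's R² = 0.99 was calculated over all B-scans, so the value obtained here is only approximate.

```python
import numpy as np

conc = np.array([0.05, 0.10, 0.25])    # TiO2 weight concentration (wt. %), Table 2
mu_est = np.array([1.0, 2.1, 4.4])     # estimated attenuation coefficients (mm^-1), Table 2

slope, intercept = np.polyfit(conc, mu_est, 1)
pred = slope * conc + intercept
r2 = 1.0 - np.sum((mu_est - pred) ** 2) / np.sum((mu_est - np.mean(mu_est)) ** 2)
print(f"slope = {slope:.1f} mm^-1 per wt.%, R^2 = {r2:.3f}")
```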
We presented a method to estimate sample attenuation coefficients from OCT measurements compensated for the effects of the axial PSF. The recorded signals were modeled by assuming single scattering in a homogeneous sample, accounting for the system's roll-off, noise and focused beam shape (axial PSF). The model parameters of the focused axial PSF were estimated from experimental OCT data. Our goal was to achieve an accurate estimation of the attenuation coefficient to enable reliable quantitative analysis of a sample under investigation. The numerical study predicted the performance, and hence the limitations, of our model for different experimental conditions. We observed that for a Rayleigh length smaller than 0.3 mm, the estimation error of the attenuation coefficient is smaller than 10% for a sample with an attenuation coefficient of 0.73 mm⁻¹ and with the location of the focused beam inside the sample. The signal decay caused by the axial PSF, for a focus located inside the sample or close to its surface, is not as significant for larger Rayleigh lengths as for smaller Rayleigh lengths. Accurate estimation of the attenuation coefficient for a focus located above the sample is not feasible, since it is nearly impossible to distinguish between the decay of the recorded signal caused by attenuation and that caused by the axial PSF.
In experiments with phantoms of different weight concentrations of TiO2 in silicone, good fits of the model to the measurements were obtained using a single B-scan as well as multiple B-scans acquired at different focus positions. In the latter case, the parameters related to the sample and the optics were shared, whereas the focus position and signal strength were allowed to vary among the different B-scans. Only for focus positions very deep into the sample did the fitted model start to deviate from the measurements closer to the sample's surface. This might be due to the background noise subtraction or an incorrect estimation of the roll-off.
The attenuation coefficients estimated from single B-scans vary among the B-scans and tend to decrease when the focus is shifted to larger depths. For focus locations above the sample and close to the surface inside the sample, the estimated attenuation coefficients are significantly different from the values estimated at larger depths. This confirms the numerical results depicted in Fig. 4(a). We observed that an incorrect estimation of the model parameter C can significantly influence the estimation of the attenuation coefficient. Therefore, finding a method to fix this parameter would significantly improve the results of estimating the attenuation coefficient from a single B-scan.
We also combined multiple B-scans acquired at different focus positions to estimate the model parameters. The Rayleigh length and the focus locations of the different B-scans could be estimated with less than 3% error when using 8 B-scans; using more B-scans did not show a significant improvement in estimating the model parameters. To compare the numerical and experimental results properly, the true attenuation coefficient values of the samples should be known. While these attenuation coefficients are unknown, we do know the concentrations of TiO2. We showed that for samples with different concentrations of TiO2 dispersed in silicone, the relation between the TiO2 weight concentration and the estimated attenuation coefficient is linear between 0.05 wt. % and 0.1 wt. %. However, the estimated attenuation coefficient for 0.25 wt. % TiO2 is slightly lower than the value expected from this linear relation. It has been shown previously that the measured attenuation coefficient falls below the expected values for increasing particle concentration due to an increase in the amount of multiple scattering [25]. Applying a multiple-scattering model could result in a better correlation between the measurements and the OCT light model.
A limitation of this work is the assumption of isotropic scattering and weak concentrations of scatterers such that a single-scattering model suffices.
In ophthalmology, we encounter a shift of the focal plane due to accommodation of the human eye and movements of the eye and the head. The method can also be applied to data in which the focus position remains fixed for all B-scans. In addition, in ophthalmology, the recorded OCT scans obtained from a clinical OCT system are usually averaged over 10-100 B-scans during acquisition to improve the image quality of the scans. To obtain the precision reported in this work (by averaging over 500 A-lines), we then only need to average over a small area in the image (5-50 neighboring A-lines). To obtain a higher spatial resolution, it is advisable to reduce the number of neighboring A-lines that are averaged while increasing the amount of averaging during acquisition. In future work, we will extend the proposed method to estimate the attenuation coefficient values of a multi-layer sample from OCT images. This is especially relevant in applications such as ophthalmology, where each retinal layer has different optical properties.
ZonMw (91212061).
This research was funded by the Netherlands Organization for Health Research and Development (ZonMw) TOP grant (91212061). We would like to acknowledge Dr. Dirk H. J. Poot (Biomedical Imaging Group Rotterdam, departments of Medical Informatics and Radiology, Erasmus Medical Center Rotterdam, Rotterdam, The Netherlands) for his technical support in this work.
1. K. A. Vermeer, J. van der Schoot, H. G. Lemij, and J. F. de Boer, "RPE-normalized RNFL attenuation coefficient maps derived from volumetric OCT imaging for glaucoma assessment," Invest. Ophthalmol. Visual Sci. 53(10), 6102–6108 (2012).
2. R. A. McLaughlin, L. Scolaro, P. Robbins, C. Saunders, S. L. Jacques, and D. D. Sampson, "Parametric imaging of cancer with optical coherence tomography," J. Biomed. Opt. 15(4), 046029 (2010).
3. K. Barwari, D. M. de Bruin, E. C. Cauberg, D. J. Faber, T. G. van Leeuwen, H. Wijkstra, J. de la Rosette, and M. P. Laguna, "Advanced diagnostics in renal mass using optical coherence tomography: a preliminary report," J. Endourol. 25(2), 311–315 (2011).
4. K. Barwari, D. M. de Bruin, D. J. Faber, T. G. van Leeuwen, J. J. de la Rosette, and M. P. Laguna, "Differentiation between normal renal tissue and renal tumors using functional optical coherence tomography: a phase I in vivo human study," BJU Int. 110(8b), E415–E420 (2012).
5. P. H. Tomlins, O. Adegun, E. Hagi-Pavli, K. Piper, D. Bader, and F. Fortune, "Scattering attenuation microscopy of oral epithelial dysplasia," J. Biomed. Opt. 15(6), 066003 (2010).
6. Q. Q. Zhang, X. J. Wu, T. Tang, S. W. Zhu, Q. Yao, B. Z. Gao, and X. C. Yuan, "Quantitative analysis of rectal cancer by spectral domain optical coherence tomography," Phys. Med. Biol. 57(16), 5235–5244 (2012).
7. F. J. van der Meer, D. J. Faber, D. M. B. Sassoon, M. C. Aalders, G. Pasterkamp, and T. G. van Leeuwen, "Localized measurement of optical attenuation coefficients of atherosclerotic plaque constituents by quantitative optical coherence tomography," IEEE Trans. Med. Imag. 24(10), 1369–1376 (2005).
8. C. Xu, J. M. Schmitt, S. G. Carlier, and R. Virmani, "Characterization of atherosclerosis plaques by measuring both backscattering and attenuation coefficients in optical coherence tomography," J. Biomed. Opt. 13(3), 034003 (2008).
9. G. van Soest, T. Goderie, E. Regar, S. Koljenovic, G. L. J. H. van Leenders, N. Gonzalo, S. van Noorden, T. Okamura, B. E. Bouma, G. J. Tearney, J. W. Oosterhuis, P. W. Serruys, and A. F. W. van der Steen, "Atherosclerotic tissue characterization in vivo by optical coherence tomography attenuation imaging," J. Biomed. Opt. 15(1), 011105 (2010).
10. J. M. Schmitt, A. Knüttel, M. Yadlowsky, and M. A. Eckhaus, "Optical-coherence tomography of a dense tissue: statistics of attenuation and backscattering," Phys. Med. Biol. 39(10), 1705–1720 (1994).
11. R. O. Esenaliev, K. V. Larin, I. V. Larina, and M. Motamedi, "Non-invasive monitoring of glucose concentration with optical coherence tomography," Opt. Lett. 26(13), 992–994 (2001).
12. A. I. Kholodnykh, I. Y. Petrova, K. V. Larin, M. Motamedi, and R. O. Esenaliev, "Precision of Measurement of Tissue Optical Properties with Optical Coherence Tomography," Appl. Opt. 42(16), 3027–3037 (2003).
13. L. Thrane, H. T. Yura, and P. E. Andersen, "Analysis of optical coherence tomography systems based on the extended Huygens Fresnel principle," J. Opt. Soc. Am. A 17(3), 484–490 (2000).
14. V. D. Nguyen, D. J. Faber, E. van der Pol, T. G. van Leeuwen, and J. Kalkman, "Dependent and multiple scattering in transmission and backscattering optical coherence tomography," Opt. Express 21(24), 29145–29156 (2013).
15. K. A. Vermeer, J. Mo, J. J. Weda, H. G. Lemij, and J. F. de Boer, "Depth-resolved model-based reconstruction of attenuation coefficients in optical coherence tomography," Biomed. Opt. Express 5(1), 322–337 (2014).
16. B. Ghafaryasl, K. A. Vermeer, J. F. de Boer, M. E. J. van Velthoven, and L. J. van Vliet, "Noise-adaptive attenuation coefficient estimation in spectral domain optical coherence tomography data," in Proceedings of IEEE 13th International Symposium on Biomedical Imaging (ISBI), 706–709 (2016).
17. S. H. Yun, G. J. Tearney, B. E. Bouma, B. H. Park, and J. F. de Boer, "High-speed spectral-domain optical coherence tomography at 1.3 µm wavelength," Opt. Express 11(26), 3598–3604 (2003).
18. T. G. van Leeuwen, D. J. Faber, and M. C. Aalders, "Measurement of the axial point spread function in scattering media using single-mode fiber-based optical coherence tomography," IEEE J. Select. Topics Quantum Electron. 9(2), 227–233 (2003).
19. B. Ghafaryasl, K. A. Vermeer, J. Kalkman, T. Callewaert, J. F. de Boer, and L. J. van Vliet, "Accurate estimation of the attenuation coefficient from axial point spread function corrected OCT scans of a single layer phantom," Proc. SPIE 10483, 104832B (2018).
20. G. T. Smith, N. Dwork, D. O'Connor, U. Sikora, K. L. Lurie, J. M. Pauly, and A. K. Ellerbee, "Automated, depth-resolved estimation of the attenuation coefficient from optical coherence tomography data," IEEE Trans. Med. Imaging 34(12), 2592–2602 (2015).
21. S. Stefan, K. S. Jeong, C. Polucha, N. Tapinos, S. A. Toms, and J. Lee, "Determination of confocal profile and curved focal plane for OCT mapping of the attenuation coefficient," Biomed. Opt. Express 9(10), 5084–5099 (2018).
22. D. J. Faber, F. J. van der Meer, M. C. G. Aalders, and T. G. van Leeuwen, "Quantitative measurement of attenuation coefficients of weakly scattering media using optical coherence tomography," Opt. Express 12(19), 4353–4365 (2004).
23. M. W. A. Caan, H. G. Khedoe, D. H. J. Poot, A. J. den Dekker, S. D. Olabarriaga, K. A. Grimbergen, L. J. van Vliet, and F. M. Vos, "Estimation of diffusion properties in crossing fiber bundles," IEEE Trans. Med. Imaging 29(8), 1504–1515 (2010).
24. T. Callewaert, J. Dik, and J. Kalkman, "Segmentation of thin corrugated layers in high-resolution OCT images," Opt. Express 25(26), 32816–32828 (2017).
25. J. Kalkman, A. V. Bykov, D. J. Faber, and T. G. van Leeuwen, "Multiple and dependent scattering effects in Doppler optical coherence tomography," Opt. Express 18(4), 3883–3892 (2010).
(1) $$S(z) = R(z)\,\frac{1}{\left( \frac{z - z_0}{2 z_R} \right)^2 + 1}\,C e^{-2\mu z} + N(z) + \varepsilon(z),$$

(2) $$\chi = \sum_{j=1}^{N_D}\left[ A(z_j) - \frac{C e^{-2\mu z_j}}{\left( \frac{z_j - z_0}{2 z_R} \right)^2 + 1} \right]^2,$$

(3) $$\chi = \sum_{i=1}^{N_A}\sum_{j=1}^{N_D}\left[ A_i(z_j) - \frac{C e^{-2\mu z_j}}{\left( \frac{z_j - z_0}{2 z_R} \right)^2 + 1} \right]^2$$

(4) $$f(z; z_0, z_R) = \frac{C}{\left( \frac{z - z_0}{2 z_R} \right)^2 + 1}$$
Overview of different OCT signal models with different degrees of freedom (DOF). The unknown independent parameters among the averaged A-lines which need to be estimated are indicated by "Indep." and the fixed (known) or common (joint) parameters are shown by "Fix/Com." in the table.

C (arb. units):  Fix/Com. | Fix/Com. | Indep.   | Fix/Com. | Indep.   | Indep.   | Indep.
µ (mm⁻¹):        Indep.   | Indep.   | Indep.   | Indep.   | Indep.   | Indep.   | Indep.
z0 (µm):         Fix/Com. | Indep.   | Fix/Com. | Indep.   | Fix/Com. | Indep.   | Indep.
zR (µm):         Indep.   | Fix/Com. | Fix/Com. | Indep.   | Indep.   | Fix/Com. | Indep.
DOF:             2        | 2        | 2        | 3        | 3        | 3        | 4
Algebra|Coalgebra Seminar
This page concerns the Algebra|Coalgebra Seminar at the Institute for Logic, Language & Computation (ILLC) of the University of Amsterdam. The purpose of the seminar is to disseminate results and insights about and around algebraic and coalgebraic methods in logic.
News & Future meetings
The slides of the previous talk, by Jason Parker, can be found here. Those interested in reading more can consult Jason's PhD Thesis or this journal article.
Speaker: Alexandra Silva (University College London)
Date: Wednesday, 3 February, 2021.
Time: 16:00 (CET)
Location: Online (Zoom Meeting ID 922-5064-0302)
[Title & abstract TBA]
Title: Strong Completeness in Topological Semantics
Speaker: Philip Kremer (University of Toronto)
Date: Wednesday, 17 February, 2021.
In the topological semantics for modal logic, S4 is well-known to be complete for the rational line, for the real line, and for Cantor space: these are special cases of S4's completeness for any dense-in-itself metric space. The construction used to prove completeness can be slightly amended to show that S4 is not only complete, but also strongly complete, for the rational line. But no similarly easy amendment is available for the real line or for Cantor space. In 2013, I proved that S4 is strongly complete for any dense-in-itself metric space, taking a detour through algebraic semantics. In this talk, I will review that proof.
Speaker: Daniyar Shamkanov (Steklov Mathematical Institute)
Date: Wednesday, 3 March, 2021.
Speaker: Corina Cirstea (University of Southampton)
Date: Wednesday, 31 March, 2021.
Speaker: Matteo Mio (ENS-Lyon)
Date: Wednesday, 14 April, 2021.
Speaker: Jurriaan Rot (Radboud University)
Speaker: Luigi Santocanale (Aix-Marseille Université)
Date: Wednesday, 26 May, 2021.
Title: Isotropy Groups of Quasi-Equational Theories (Slides, PhD Thesis, Journal article)
Speaker: Jason Parker (Brandon University)
Date: Wednesday, 6 January, 2021.
In [2], my PhD supervisors (Pieter Hofstra and Philip Scott) and I studied the new topos-theoretic phenomenon of isotropy (as introduced in [1]) in the context of single-sorted algebraic theories, and we gave a logical/syntactic characterization of the isotropy group of any such theory, thereby showing that it encodes a notion of inner automorphism or conjugation for the theory. In the present talk, I will summarize the results of my recent PhD thesis, in which I build on this earlier work by studying the isotropy groups of (multi-sorted) quasi-equational theories (also known as essentially algebraic, cartesian, or finite limit theories). In particular, I will show how to give a logical/syntactic characterization of the isotropy group of any such theory, and that it encodes a notion of inner automorphism or conjugation for the theory. I will also describe how I have used this characterization to exactly characterize the 'inner automorphisms' for several different examples of quasi-equational theories, most notably the theory of strict monoidal categories and the theory of presheaves valued in a category of models. In particular, the latter example provides a characterization of the (covariant) isotropy group of a category of set-valued presheaves, which had been an open question in the theory of categorical isotropy.
[1] J. Funk, P. Hofstra, B. Steinberg. Isotropy and crossed toposes. Theory and Applications of Categories 26, 660-709, 2012.
[2] P. Hofstra, J. Parker, P.J. Scott. Isotropy of algebraic theories. Electronic Notes in Theoretical Computer Science 341, 201-217, 2018.
Title: The Univalence Principle
Speaker: Paige Randall North (Ohio State University)
Date: Wednesday, 2 December, 2020.
Time: 15:00 (CET) Note: this seminar will be held one hour earlier than usual.
The Equivalence Principle is an informal principle asserting that equivalent mathematical objects have the same properties. In Univalent Foundations -- a foundation of mathematics based on dependent type theory -- this principle has been formally stated and proven for various sorts of structured sets (such as monoids, groups, etc) and for categories. In fact, this follows from a similar principle that has been formally stated and proven in these cases: the Univalence Principle. This says that any two equivalent mathematical objects are equal. In this talk, I will describe a Univalence Principle that holds for more sorts of mathematical objects and, in particular, many higher-categorical structures. This is joint work with Benedikt Ahrens, Mike Shulman, and Dimitris Tsementzis.
Title: Effective Kan fibrations in simplicial sets
Speaker: Benno van den Berg (ILLC)
Date: Wednesday, 4 November, 2020.
Homotopy type theory started in earnest when Voevodsky formulated his univalence axiom and showed that the category of simplicial sets hosts a model of type theory in which this axiom holds. After he made these contributions, people wondered to which extent this model can be developed in a constructive metatheory and can be exploited for computational purposes. Some serious obstacles were found and in response many researchers switched to cubical sets and started working on "cubical type theories". In joint work with Eric Faber I have developed an approach which seeks to circumvent the obstructions for developing a simplicial sets model for homotopy type theory in a constructive fashion. In this talk I will try to give some idea of how we try to do this, what we have achieved thus far and how our project compares to related ones by Gambino, Henry, Szumilo and Sattler.
Title: A Functional (Monadic) Second-Order Theory of Infinite Trees
Speaker: Colin Riba (ENS de Lyon)
Date: Wednesday, 21 October, 2020.
Time: 16:00 (CEST)
We present a complete axiomatization of Monadic Second-Order Logic (MSO) over infinite trees. By a complete axiomatization we mean a complete deduction system with a polynomial-time recognizable set of axioms. By naive enumeration of formal derivations, this formally gives a proof of Rabin's Tree Theorem (the decidability of MSO over infinite trees). The deduction system consists of the usual rules for second-order logic seen as two-sorted first-order logic, together with the natural adaptation to infinite trees of the axioms of MSO on ω-words. In addition, it contains an axiom scheme expressing the (positional) determinacy of certain parity games.
The main difficulty resides in the limited expressive power of the language of MSO. We actually devise an extension of MSO, called Functional (Monadic) Second-Order Logic (FSO), which allows us to uniformly manipulate (hereditarily) finite sets and corresponding labeled trees, and whose language allows for higher abstraction than that of MSO.
Title: Guarded Kleene Algebra with Tests
Speaker: Tobias Kappé (Cornell University)
Date: Wednesday, 7 October, 2020.
A Guarded Kleene Algebra with Tests (GKAT) is a type of Kleene Algebra with Tests (KAT) that arises by restricting the operators of KAT to "deterministic" (predicate-guarded) versions. This makes GKAT especially suited to reason about flow control in imperative programs. In contrast with KAT, where the equivalence problem is PSPACE-complete, we show that equivalence in GKAT can be decided in almost linear time. We also provide a full Kleene theorem and prove completeness w.r.t. a Salomaa-style axiomatization, both of which require us to develop a coalgebraic theory of GKAT.
A|C seminar in 2019/2020
Title: Modal Intuitionistic Logics as Dialgebraic Logics
Speaker: Jim de Groot (Australian National University)
Date: Wednesday, 8 July, 2020.
Time: 11:00 (CEST) Note: this seminar will be held at an unusual time.
In a recent LICS paper, Dirk Pattinson and I generalised the paradigm of coalgebraic logic to what we call dialgebraic logic. The main motivation behind this was to provide a coalgebra-like framework for modal intuitionistic logics. In this talk, I will explain the main ideas and constructions from the paper.
Guided by an easy example of a modal intuitionistic logic we will see where the coalgebraic approach breaks down. We then use the notion of a dialgebra to fix this problem. This leads to a general (dialgebraic) framework, where modal logics can be given by predicate liftings and axioms, in a similar fashion as for Set-coalgebras. Finally, we investigate a categorical notion of completeness, obtain a general completeness result, and instantiate this to our example.
Title: Choice-Free Duality for Orthocomplemented Lattices
Speakers: Joseph McDonald (ILLC) & Kentarô Yamamoto (UC Berkeley)
Time: 17:00 (CEST) Note: this seminar starts one hour later than usual.
The aim of this talk is to outline our ongoing research in which we are developing a choice-free topological duality for orthocomplemented lattices (or simply, ortholattices) by means of a special subclass of spectral spaces, which we call upper Vietoris orthospaces (or simply, UVO-spaces). Our techniques combine the choice-free spectral space approach to the topological duality theory of Boolean algebras developed by Bezhanishvili and Holliday with the choice-dependent Stone space approach to the topological duality theory of ortholattices originally developed by Goldblatt, and then later by Bimbo. In light of this duality, we will discuss the "duality dictionary" we are currently developing, which makes explicit the translation between ortholattice and UVO-space concepts. Lastly, we discuss how our duality relates to other dualities and what applications we hope to explore.
Title: From choice-free Stone duality to choice-free model theory?
Speaker: Wesley Holliday (UC Berkeley)
Date: Wednesday, 24 June, 2020.
In a recent paper, "Choice-free Stone duality" (JSL, March 2020), Nick Bezhanishvili and I developed a choice-free duality theory for Boolean algebras using special spectral spaces, called upper Vietoris spaces (UV-spaces). In this talk, I will cover the basics of this duality and explain how it leads to "possibility semantics" for classical first-order logic. While the completeness theorem for first-order logic for uncountable languages with respect to traditional Tarskian models requires a nonconstructive choice principle unprovable in ZF set theory, I will sketch a choice-free proof of the completeness theorem for first-order logic for arbitrary languages with respect to possibility models. This result raises the prospect of a program of "choice-free model theory."
Title: Slanted Canonicity of Analytic Inductive Inequalities
Speaker: Laurent De Rudder (Université de Liège)
In this talk, we broach an algebraic canonicity theorem for normal LE-logics of arbitrary signature in the generalized setting of slanted algebras, i.e. lattice expansions in which the non-lattice operations map tuples of elements of the given lattice to closed or open elements of its canonical extension. Interestingly, the syntactic shape of LE-inequalities which guarantees canonicity in this generalized setting turns out to coincide with the shape of analytic inductive inequalities, which guarantees that LE-inequalities can be equivalently captured by analytic structural rules of a proper display calculus.
Title: Temporal interpretation of intuitionistic quantifiers (Joint work with G. Bezhanishvili)
Speaker: Luca Carai (New Mexico State University)
In this talk we discuss how to provide a temporal interpretation of intuitionistic quantifiers: the universal quantifier is interpreted as "for every object in the future" and the existential quantifier as "for some object in the past". One of the merits of this perspective is overcoming the non-symmetry of the interpretation of quantifiers in the usual Kripke semantics for the predicate intuitionistic logic IQC.
Our goal is to adapt the predicate Gödel translation to a translation of IQC into a suitable tense logic. Because of issues related to constant domains, we cannot choose the predicate tense S4 system QS4.t as the target of the translation. Following the work of Corsi, we introduce the tense logic Q°S4.t obtained by weakening the axioms of QS4.t. We then supply a full and faithful translation of IQC into Q°S4.t whose faithfulness is shown using syntactic methods and its fullness is proved utilizing the generalized Kripke semantics of Corsi.
Title: Separable MV-algebras (Joint work with V. Marra)
Speaker: Matías Menni (Conicet and Universidad Nacional de La Plata)
Date: Wednesday, 3 June, 2020.
We will recall the definition of decidable object in an extensive category and characterize the decidable objects in the opposite of the category of MV-algebras.
Title: Are locally finite MV-algebras a variety? (Joint work with M. Abbadini)
Speaker: Luca Spada (Università degli Studi di Salerno)
[slides]
We answer Mundici's problem number 3 (Advanced Łukasiewicz calculus, page 235): Is the category of locally finite MV-algebras equivalent to an equational class? We prove:
The category of locally finite MV-algebras is not equivalent to any finitary variety of algebras. More is true: the category of locally finite MV-algebras is not equivalent to any finitely-sorted finitary quasi-variety of algebras.
The category of locally finite MV-algebras is equivalent to a variety of infinitary algebras (with operations of countable arity).
The category of locally finite MV-algebras is equivalent to a countably-sorted variety of finitary algebras.
Our proofs rest upon the duality between locally finite MV-algebras and the category of multisets by Cignoli, Dubuc and Mundici, and categorical characterisations of varieties and quasi-varieties proved by Lawvere, Duskin and many others.
Title: Describable Nuclea, Negative Translations and Extension Stability
Speaker: Tadeusz Litak (FAU Erlangen-Nürnberg)
Date: Tuesday, 12 May, 2020.
Location: Online (see the TULIPS webpage; note: pre-registration required)
What do soundness/completeness of negative translations of intuitionistic modal logics, extension stability of preservativity/provability logics and the use of nuclea on Heyting Algebra Expansions (HAEs) to form other HAEs have in common? As it turns out, in all those cases one appears to deal with a certain kind of subframe property for a given logic, i.e., the question whether the logic in question survives a transition to at least some types of substructures of its Kripke-style or neighbourhood-style frames. The nucleic perspective on subframe logics has been introduced by Silvio Ghilardi and Guram Bezhanishvili (APAL 2007) for the purely superintuitionistic syntax (without modalities or other additional connectives). It has not been used so much in the modal setting, mostly because the focus in the field tends to be on modal logics with classical propositional base, and nuclea are a rather trivial notion in the boolean setting. However, things are very different intuitionistically. Since the 1970's, nuclea have been studied in the context of point-free topologies (lattice-complete Heyting algebras), sheaves and Grothendieck topologies on toposes, and finally arbitrary Heyting algebras (Macnab 1981). Other communities may know them as "lax modalities" or (a somewhat degenerate case of) "strong monads".
We marry the nuclei view on subframe properties with the framework of "describable operations" introduced to study subframe logics in Frank Wolter's PhD Thesis (1993). Wolter's original setting was restricted to classical modal logics, but with minimal care his setup can be made to work intuitionistically and nuclea provide the missing ingredient to make it fruitful. From this perspective, we revisit the FSCD 2017 study of soundness and completeness of negative translations in modal logic (joint work with Miriam Polzer and Ulrich Rabenstein) and our present study of extension stability for preservativity logics based on the constructive strict implication (jointly with Albert Visser). Various characterization and completeness results can be obtained in a generic way.
[This instalment of the seminar is joint with TULIPS at the Department of Philosophy and Religious Studies of Utrecht University.]
Title: Modal logic and measurable cardinals
Speaker: Guram Bezhanishvili (New Mexico State University)
I will discuss our recent result that characterizes the existence of a measurable cardinal in terms of the topological completeness of a simple modal logic with respect to a normal space.
Joint work with N. Bezhanishvili, J. Lucero-Bryan, and J. van Mill
Title: Circular proofs for the hybrid mu-calculus
Speaker: Sebastian Enqvist (Stockholms universitet)
Circular proofs have recently been used by Afshari and Leigh to prove completeness of a cut-free sequent calculus for the modal mu-calculus. They make use of an annotated circular proof system due to Jungteerapanich and Stirling, which uses a system of names for fixpoint unfoldings. In this talk I present a circular proof system in the same style for the hybrid mu-calculus, extending the modal mu-calculus with nominals and satisfaction operators. My hope is that this work will be a starting point towards developing complete proof systems for extensions of the mu-calculus that lack the tree model property, like guarded fixpoint logic and two-way mu-calculus. I will present the proof system and outline some key parts of the completeness proof, and try to explain some of the difficulties posed by the hybrid operators.
Title: Unification in coalgebraic modal logics
Speaker: Johannes Marti (ILLC)
Date: Wednesday, 1 April, 2020.
In this talk I present a characterization of the unification problem in coalgebraic modal logics in terms of the existence of a morphism between one-step coalgebras.
The unification problem is the following decision problem: Given two terms in the free algebra of some variety, is there a substitution under which they are equal? In the work presented in this talk we consider the unification problem for varieties whose free algebras are the syntactic algebras of coalgebraic modal logics. We apply ideas from Ghilardi's characterization of the substitution in the theory of normal forms to reformulate the unification problem on the level of the one-step coalgebras that are dual to the one-step algebras approximating the free algebra. The advantage of this approach is that in many interesting cases the points of one-step coalgebras are finite tree-like objects and the unification problem becomes a combinatorial problem about these trees.
We consider three instances of this characterization:
We obtain a new proof of the decidability of unification for the modal logic (Alt1), which was originally established by Balbiani and Tinchev. (Alt1) is the modal logic of the class of frames in which every world has at most one successor. With our characterization the decidability of unification in this logic follows from a simple pumping argument.
As a variation on the previous example we consider the logic over frames in which every world has exactly one successor that is labeled with either 0 or 1. In this case the unification problem can be reformulated as the question whether there exists a graph homomorphism from some de Bruijn graph into a graph given in the input. So far, we have not been able to establish whether this problem is decidable or not.
An important motivation for developing our characterization is that the modal logic K can be presented as a coalgebraic modal logic. Despite considerable efforts the decidability of unification in K is an open question.
Title: Nested sequents for the logic of conditional belief
Speaker: Marianna Girlando (Inria Saclay - LIX)
Location: Room F3.20, Science Park 107, Amsterdam.
Conditional Doxastic Logic (CDL) is a multi-agent epistemic logic introduced by Baltag and Smets to model the conditional beliefs of an agent. In this talk I introduce a natural neighbourhood semantics for the static fragment of CDL. I then present a nested sequent calculus, sound and complete with respect to neighbourhood models for CDL. The proof system is analytic, has a direct formula interpretation and provides a decision procedure for CDL. Moreover, since the logic can be embedded into the multi-agent version of modal logic S5, called S5i, our proof system can be used to define a nested sequent calculus for S5i.
This talk is based on a joint work with Björn Lellmann and Nicola Olivetti.
Anne Troelstra Memorial Event
Date: Friday, March 6, 2020
Location: Amsterdam Science Park Congress Centre, Science Park 105, Amsterdam.
[More information]
Title: From Complementary Logic to Proof-Theoretic Semantics
Speaker: Gabriele Pulcini (ILLC)
Two proof-systems P and P* are said to be complementary when one proves exactly the non-theorems of the other. Complementary systems come as a particular kind of refutation calculi whose patterns of inference always work by inferring unprovable conclusions from unprovable premises.
In the first part of my talk, I will focus on LK*, the sequent system complementing Gentzen system LK for classical logic. I will show, then, how to enrich LK* with two admissible (unary) cut rules, which allow for a simple and efficient cut-elimination algorithm. In particular, two facts will be highlighted: 1) for any given provable sequent, complementary cut-elimination always returns one of its simplest proofs, and 2) provable LK* sequents turn out to be "deductively polarized" by the empty sequent.
In the second part, I will observe how an alternative complementary sequent system can be obtained by slightly modifying Kleene's system G4. I will finally show how this move could pave the way for a novel approach to multi-valuedness and proof-theoretic semantics for classical logic.
Title: One step to admissibility in intuitionistic Gödel-Löb logic
Speaker: Iris van der Giessen (Universiteit Utrecht)
I would like to present ongoing work on intuitionistic modal logics iGL and iSL which have a close connection to the (unknown!) provability logic of Heyting Arithmetic. Classically, Gödel-Löb logic GL admits a provability interpretation for Peano Arithmetic. iGL is its intuitionistic counterpart and iSL is iGL extended by explicit completeness principles. I will characterize both systems via an axiomatization and in terms of Kripke models. The main goal is to understand their admissible rules in order to get insight in the structure of those logics. To do so, I want to focus on one step in this direction: Ghilardi's wonderful result connecting projective formulas to the extension property in Kripke models.
Title: Guarded Recursion for Coinductive and Higher-Order Stochastic Systems
Speaker: Henning Basold (Universiteit Leiden)
Date: Wednesday, 22 January, 2020.
In stochastic modelling, as it appears in artificial intelligence, biology or physics, one often encounters the question of how to model a probabilistic process that may run indefinitely. Such processes have been described in non-probabilistic programming successfully by using coinductive types and coiteration. Vákár et al. (POPL'19) introduced semantics for a higher-order probabilistic programming language with full recursion in types and programs based on quasi-Borel spaces. Full recursion allows the introduction of coinductive types, but this comes at the price of losing termination and productivity.
In this talk, we will discuss a language and its semantics for stochastic modelling of processes that run potentially indefinitely, like random walks, and for distributions that arise from indefinite runs, like geometric distributions. Such processes and distributions will be described as so-called coinductive types and programmed via so-called guarded recursion, which ensures that the programs describing random walks etc. are terminating. Central to guarded recursion is the so-called later modality, which allows one to express that the computation of an element of a type is delayed. This allows one to use type-checking to ensure termination instead of, e.g., syntactic guardedness. This eases reasoning about such processes, while the recursion offered by the language is still convenient to use.
The language appeared in the paper "Coinduction in Flow — The Later Modality in Fibrations" that I presented at CALCO'19, see here.
Workshop on the occasion of Dick de Jongh's 80th Birthday
When: Wednesday, November 27, 2019
Where: ILLC's Common Room, SP107, F1.21.
Title: Points of Boolean contact algebras
Speaker: Rafał Gruszczyński (Nicolaus Copernicus University in Toruń)
One of the main tasks of philosophically oriented point-free theories of space is to deliver a plausible definition of points, entities which are assumed as primitives in point-based geometries and topologies. It is understood that - unlike in the case of algebraically oriented theories of frames and locales - such definitions should be intuitive from a geometrical point of view, and should refer to objects which can be interpreted in the real world. Thus points are replaced by regions and the former are explained as abstractions from the latter. The main purpose of the talk is to draw a comparison between three seminal definitions of points: Andrzej Grzegorczyk's, Alfred Whitehead's and Hendrik De Vries's, within the framework of Boolean contact algebras.
Title: Profinite Heyting algebras and the representation problem for Esakia spaces
Speaker: Tommaso Moraschini (Czech Academy of Sciences and University of Barcelona)
A poset is said to be "representable" if it can be endowed with an Esakia topology. Grätzer's classical representation problem asks for a description of representable posets. A solution to this problem is not expected to take a simple form, as representable posets do not form an elementary class. Since at the moment a solution to the representation problem seems out of reach, we will address a simpler version of the problem which, roughly speaking, asks to determine the posets that may occur as top parts of Esakia spaces. Crossing the bridge between algebra and topology, this task amounts to characterizing the profinite Heyting algebras that are also profinite completions of some Heyting algebras. We shall report on the on-going effort to solve this problem, and on some preliminary results.
This talk is based on joint work with G. Bezhanishvili, N. Bezhanishvili, and M. Stronkowski.
Workshop on the occasion of Frederik Lauridsen's PhD defense
Title: Algebraic and Proof Theoretic Methods in Non-Classical Logic
When: Thursday and Friday, October 10 - 11, 2019
Where: Potgieterzaal (C0.01), University Library, Singel 425, Amsterdam.
Title: Internal PCAs and their Slices
Speaker: Jetze Zoethout (Utrecht University)
A partial combinatory algebra (PCA) is an abstract model of computability. Every such PCA gives rise to a category of assemblies, which can be seen as the category of all datatypes that can be implemented in this PCA. It turns out that the class of all categories that arise in this way is not closed under slicing.
In this talk, we will study a generalization of PCAs introduced by Wouter Stekelenburg, which has the property that the corresponding class of categories of assemblies is closed under slicing. These more general PCAs are constructed internally in a regular category, and express a relative notion of computability. We show how to perform the slice construction at the level of PCAs, and we show that these generalized PCAs, like the classical PCAs, form a 2-category. Moreover, we generalize the notion of computational density studied by Pieter Hofstra and Jaap van Oosten to the current setting.
Title: A Quillen model structure for bigroupoids and pseudofunctors
Speaker: Martijn den Besten (ILLC)
A key notion in modern homotopy theory is that of a Quillen model structure. This concept was designed to capture axiomatically some of the essential properties of homotopy of topological spaces. A model structure is determined by specifying three classes of morphisms satisfying certain conditions. These conditions are motivated by the properties of certain classes of continuous functions in the category of topological spaces. Indeed, categories whose objects can be thought of as spaces often admit the structure of a model category. Awodey and Warren showed that Martin-Löf type theory with identity types can essentially be interpreted in any such category.
In this talk, I will present a model structure on the category of bigroupoids and pseudofunctors. In the construction of the model structure, two coherence theorems are used, which allow us to recognize that certain diagrams commute at a glance. I will also spend some time discussing these theorems.
Title: Spatial Model Checking and Applications to Medical Image Analysis
Speaker: Vincenzo Ciancia (Institute of Information Science and Technologies "A. Faedo")
Date: Tuesday, 1 October, 2019.
Spatial aspects of computation are prominent in Computer Science, especially when dealing with systems distributed in physical space or with image data acquired from various sources. However, formal verification techniques are usually concerned with temporal properties and do not explicitly handle spatial information.
Our work stems from the topological interpretation of modal logics, the so-called Spatial Logics. We present a topology-based approach to model checking for spatial and spatio-temporal properties. Our results include theoretical developments in the more general setting of Čech closure spaces, a study of a "collective" variant, which has been compared to region calculi in recent work, publicly available software tools, and some case studies in the setting of smart transportation.
In recent joint work with the University Hospital of Siena, we have explored the application domain of automatic contouring in Medical Imaging, introducing the tool VoxLogicA, which merges the state-of-the-art imaging library ITK with the unique combination of declarative specification and optimised execution provided by spatial model checking. The analysis of an existing benchmark of medical images for segmentation of brain tumours shows that simple VoxLogicA specifications can reach state-of-the-art accuracy, competing with best-in-class algorithms based on machine learning, with the advantage of explainability and easy replicability.
Title: Exponentiability and Theories of Dependent Types
Speaker: Taichi Uemura (ILLC)
Date: Wednesday, 11 September, 2019.
I will present a close relationship between exponentiable arrows in categories with finite limits and dependent type theories. Precisely, the opposite of the category of finitely presentable dependent type theories has an exponentiable arrow and is freely generated by this arrow (and some others) in a suitable sense. I will discuss some applications of this observation to the semantics of dependent type theory.
Title: Coalgebraic positive logic and lifting functors
Speaker: Jim de Groot (The Australian National University)
We recall coalgebraic semantics of positive modal logic and consider several examples. We encounter coalgebras for endofunctors on Pre, Pos and Pries, the categories of preorders, posets and Priestley spaces, respectively, and interpret Dunn's positive modal logic on these via predicate liftings. Thereafter, we investigate how endofunctors on these categories relate. In particular, we give two ways of lifting functors from Pos to Pries and investigate when these methods coincide.
Title: Completeness of Game Logic
Speaker: Helle Hvid Hansen (Delft University of Technology)
Game logic was introduced by Rohit Parikh in the 1980s as a generalisation of propositional dynamic logic (PDL) for reasoning about outcomes that players can force in determined 2–player games. Semantically, the generalisation from programs to games is mirrored by moving from Kripke models to monotone neighbourhood models. Parikh proposed a natural PDL-style Hilbert system which was easily proved to be sound, but its completeness has thus far remained an open problem.
In this talk, I will present the results from a paper that will be presented at LICS 2019. In the paper, we present a cut-free sequent calculus for game logic and two cut-free sequent calculi that manipulate annotated formulas, one for game logic and one for the monotone μ-calculus. We show these systems are sound and complete, and that completeness of Parikh's axiomatization follows. Our approach builds on recent ideas and results by Afshari & Leigh (LICS 2017) in that we obtain completeness via a sequence of proof transformations between the systems. A crucial ingredient is a validity-preserving translation from game logic to the monotone μ-calculus.
This is joint work with Sebastian Enqvist, Clemens Kupke, Johannes Marti and Yde Venema.
Title: A universal proof theoretical approach to interpolation
Speaker: Raheleh Jalali (Institute of Mathematics of the Czech Academy of Sciences)
Date: Wednesday, 1 May, 2019.
In her recent works, Iemhoff introduced a connection between the existence of terminating sequent calculi of a certain kind and the uniform interpolation property of the super-intuitionistic logic that the calculus captures. In this talk, we will generalize this relationship to also cover the sub-structural setting on the one hand and a much more powerful class of rules on the other. The resulting relationship then provides a uniform method to establish the uniform interpolation property for the logics FLe, FLew, CFLe, CFLew, IPC, CPC, their K and KD-type modal extensions and some basic non-normal modal logics including E, M, MC and MN. More interestingly though, on the negative side, we will show that no extension of FLe can enjoy a certain natural type of terminating sequent calculus unless it has the uniform interpolation property. This excludes almost all super-intuitionistic logics (except seven of them), the logic K4 and almost all the extensions of the logic S4 (except six of them) from having such a reasonable calculus.
Title: Towards a Logic for Conditional Strategic Reasoning
Speaker: Valentin Goranko (Stockholm University)
This talk is based on a work in progress, joint with Fengkui Ju, Beijing Normal University.
We consider systems of rational agents who act in pursuit of their individual and collective objectives and we study the reasoning of an agent or an external observer about the consequences from the expected choices of action of the other agents, based on their objectives, in order to assess the agent's ability to achieve her own objective.
For instance, consider a scenario where an agent, Alice, has an objective O(A) to achieve. Suppose that Alice has several possible choices of an action (or strategy) that would possibly, or certainly, guarantee the achievement of her objective O(A). Now, Bob, another agent or an external observer, is reasoning about the consequences of Alice's possible actions with respect to the occurrence of Bob's objective or intended outcome O(B). Depending on Bob's knowledge about Alice's objective and of her available strategic choices that would guarantee the achievement of that objective, there can be several possible cases for Bob's reasoning, based on whether or not Bob knows Alice's objective, her possible actions towards achieving that objective, and her intentions on how to act. Thus, Bob has to reason about his own abilities to achieve his objective O(B), conditional on what he knows or expects that Alice may decide to do. That scenario naturally extends to several agents reasoning about their abilities conditional on how they expect the others to act.
In some of these cases, reasoning about conditional strategic abilities can be formally described in Coalition Logic (CL) or in its temporal extension, the alternating time temporal logic ATL(*), with semantics involving strategy contexts. Other cases, however, require new and more expressive operators capturing more refined patterns of strategic ability, especially suited for the kind of conditional strategic reasoning in scenarios described above. In our work we introduce and study such new operators and logical languages that can capture variations of conditional strategic reasoning.
In this talk, after an informal discussion of conditional strategic reasoning, I will introduce some new strategic operators capturing such reasoning, will provide formal semantics for the resulting logical languages and will discuss briefly their expressiveness and some of their meta-properties. I will conclude with some open questions for future research.
[This instalment of the seminar was joint with LIRa Seminar.]
Title: Flat fixpoint logics with converse modalities
Speaker: Sebastian Enqvist (Stockholm University)
I present a completeness theorem for a class of flat modal fixpoint logics extended with the converse modality, continuing the work of Santocanale and Venema on flat fixpoint logics. I will also discuss some algebraic aspects of the results, in particular, concerning the property of constructiveness for free algebras and finitary O-adjoints.
Title: Cyclic proofs for circular reasoning
Speaker: Graham Leigh (University of Gothenburg)
Modal logic is an efficient and effective language for state-based reasoning. Its versatility is witnessed in its robustness when extended by non-local modalities: common knowledge, temporal modalities, computationally motivated `path quantifiers' and dynamic modalities, and even provability. These are all examples of modalities expressing infinitary properties of state systems: reachability or closure under fixpoint semantics. Logics built over these fixpoint modalities have natural semantics but provide a clear challenge to the development of sound and complete proof systems. In this talk, I will overview the proof theory of these expressive modal logics with a focus on re-interpreting the notion of proof in view of semantic considerations.
Title: Trajectory domains: analyzing the behavior of transition systems
Speaker: Levin Hornischer (ILLC)
We develop a novel way to analyze the possible long-term behavior of transition systems. These are discrete, possibly non-deterministic dynamical systems. Examples range from computer programs, over neural and social networks, to `discretized' physical systems. We develop a notion of when two trajectories in a system—i.e., possible sequences of states—exhibit the same type of behavior (e.g., agreeing on equilibria and oscillations). Our object of study thus is the partial order of `behaviors', i.e., equivalence classes of trajectories ordered by extension.
We identify subsystems such that, if absent, this behavior poset is an algebraic domain—whence the name `trajectory domain'. We show that they are in correspondence with a particular category of partial orders. We investigate the natural topology on the possible long-term behaviors. Finally, we comment on possible logics to reason about the behavior of a system and note curious similarities to spacetimes.
Title: Coalgebra Learning via Duality
Speaker: Clemens Kupke (University of Strathclyde)
A key result in computational learning theory is Dana Angluin's L* algorithm that describes how to learn a deterministic finite automaton (DFA) using membership and equivalence queries. I will present a generalisation of this algorithm to the level of coalgebras. The approach relies on the use of logical formulas as tests, based on a dual adjunction between states and logical theories. This allows us to learn, e.g., labelled transition systems, using modal formulas as tests.
Joint work with Simone Barlocco and Jurriaan Rot.
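To make the two kinds of queries concrete, here is a minimal Haskell sketch of my own (the names are illustrative, not from the talk): a teacher answers membership queries about single words and equivalence queries about a hypothesis, returning a counterexample when the hypothesis is wrong.

type Word' a = [a]

data Teacher a = Teacher
  { member :: Word' a -> Bool                       -- membership query
  , equiv  :: (Word' a -> Bool) -> Maybe (Word' a)  -- equivalence query:
  }                                                 -- Nothing = hypothesis accepted,
                                                    -- Just w  = counterexample w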
Title: Modal logics of polygons and beyond
Speaker: David Gabelaia (Tbilisi State University, A. Razmadze Mathematical Institute)
We will present results of an ongoing project investigating modal logics that arise from interpreting modal formulas as piecewise linear subsets (polyhedra) of a Euclidean space of dimension n>0.
We will focus in the talk on the case of n=2 and show that the modal logic of 2D polygons is finitely axiomatizable, has the finite model property in its Kripke semantics and is decidable. The finite axiomatization boils down semantically to identifying the finite set of `forbidden' Kripke frames, to which non-frames of the logic are reducible. We will discuss these `forbidden configurations' and their geometric meaning. Some of the observations made for n=2 generalize to higher dimensions; however, the cases for n > 2 present substantial challenges, some of which we also hope to highlight in the talk.
This is a joint work with members of Esakia seminar.
Title: Logical aspects of algebra-valued models of set theory
Speaker: Robert Paßmann (ILLC)
Date: Wednesday, 19 December, 2018.
In this talk, we will be concerned with the propositional logics of algebra-valued models of set theory. First, we will recall the basic construction, and introduce the notions of loyalty and faithfulness as measures for how much information about the underlying algebra is preserved in the model. Moreover, we will see how these notions connect to the de Jongh property. In the second part, we will survey some of the results we obtained so far in this area (partly joint with Galeotti, and with Löwe and Tarafder).
Title: Automata that Transform Infinite Words
Speaker: Jörg Endrullis (Vrije Universiteit Amsterdam)
Finite automata can act as acceptors, which define (sets of) words, and as transformers, which transform words. While automata on infinite words play a central role in model checking and in the theory of automatic sequences, the transformational aspect of automata has hardly been studied for infinite words, also known as streams. There is a complete lack of theory to even answer simple concrete questions about whether certain transformations are possible.
This talk focuses on finite state transducers, a generalised form of Mealy machines, and the ensuing stream transformation (also called transduction). The transducibility relation on streams gives rise to a hierarchy of stream degrees, analogous to the well-known Turing degrees. We refer to this hierarchy as the Transducer degrees. In contrast to the Turing degrees, which have been studied extensively in the 1960s and 1970s, the Transducer degrees are largely unexplored territory. Despite finite state transducers being very simple and elegant devices, we hardly understand their power for transforming streams. In this talk, I will mention a few results that we have obtained in our initial exploration of the Transducer degrees, and several challenging open questions.
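For intuition, a minimal Haskell sketch of my own of a Mealy-style machine and the word/stream transformation it induces (the transducers in the talk are a more general model):

-- a Mealy machine: from a state and an input letter, produce an output letter
-- and a next state
data Mealy s a b = Mealy { next :: s -> a -> (b, s) }

-- the induced transformation on (finite or lazily infinite) words
run :: Mealy s a b -> s -> [a] -> [b]
run _ _ []       = []
run m s (a : as) = let (b, s') = next m s a in b : run m s' as

-- example over bits: output each bit xor-ed with the previous one
xorPrev :: Mealy Int Int Int
xorPrev = Mealy (\prev a -> ((prev + a) `mod` 2, a))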
Title: Beth definability and the Stone-Weierstrass Theorem
Speaker: Luca Reggio (Czech Academy of Sciences)
Date: Wednesday, 28 November, 2018.
The Weierstrass approximation theorem states that any continuous real-valued function defined on a closed real interval can be uniformly approximated by polynomials. In 1937 Marshall Stone proved a vast generalisation of this result: nowadays known as the Stone-Weierstrass theorem, it is a fundamental result of functional analysis with far-reaching consequences. We show, through duality theory, that the Stone-Weierstrass theorem is a consequence of the Beth definability property of a certain logic, stating that implicit definability is equivalent to explicit definability.
Title: What is the size of a formula? Basic size matters in the modal mu-calculus
Speaker: Yde Venema (ILLC)
There are at least three ways to define the size of a formula in the modal mu-calculus: as its length (number of symbols), as its subformula size (number of subformulas) or as its closure size (the size of its Fischer-Ladner closure set). While it is well known that the subformula size of a formula can be exponentially smaller than its length, it came as a surprise to many when Bruse, Friedman & Lange showed that the closure size can be exponentially smaller than the subformula size. In the talk we discuss some size matters in the mu-calculus. The main point will be to argue that closure size is a more natural notion than subformula size. All notions will be defined and explained.
The talk reports on ongoing joint work with Clemens Kupke and Johannes Marti.
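To fix intuitions, a standard illustration (mine, not taken from the abstract): let φ_0 = p and φ_{n+1} = φ_n ∧ φ_n. The length of φ_n is at least 2^n, while φ_n has only n+1 distinct subformulas, so subformula size can indeed be exponentially smaller than length. Exhibiting the analogous gap between closure size and subformula size is subtler and requires genuine fixpoint formulas, since for fixpoint-free formulas the Fischer-Ladner closure coincides with the set of subformulas.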
Title: Etale groupoid convolution algebras
Speaker: Benjamin Steinberg (CUNY)
In a 2010 paper I assigned a convolution algebra over any commutative base ring to an etale groupoid with totally disconnected unit space. These kinds of algebras include several families of well studied rings including group rings, commutative algebras generated by idempotents over fields, crossed products of the previous two examples, inverse semigroup rings and Leavitt path algebras. All these algebras are discrete analogues of well studied C*-algebras. It turns out that there are a number of analogies between algebraic properties of etale groupoid convolution algebras and the C*-algebras of the groupoids (e.g., simplicity, Cartan pairs). In this talk we will survey examples, recent results and highlight the connection between sheaves over the etale groupoid and modules over the algebra.
Title: Difference hierarchies and duality
Speaker: Célia Borlido (Laboratoire J.A. Dieudonné)
The notion of a difference hierarchy, first introduced by Hausdorff, plays an important role in many areas of mathematics, logic and theoretical computer science such as in descriptive set theory, complexity theory, and in the theory of regular languages and automata. From a lattice theoretic point of view, the difference hierarchy over a bounded distributive lattice stratifies the Boolean algebra generated by it according to the minimum length of difference chains required to describe each Boolean element. While each Boolean element has a description by a finite difference chain, there is no canonical such writing in general. In this talk I will show that, relative to the filter completion, or equivalently, the lattice of closed upsets of the dual Priestley space, each Boolean element over the lattice has a canonical minimum length decomposition into a Hausdorff difference. As a corollary each Boolean element over a (co-)Heyting algebra has a canonical difference chain. With a further generalization of this result involving a directed family of adjunctions with meet-semilattices, we give an elementary proof of the fact that a regular language is given by a Boolean combination of purely universal sentences using arbitrary numerical predicates if and only if it is given by a Boolean combination of purely universal sentences using only regular numerical predicates.
This is based on joint work with Gehrke, Krebs and Straubing.
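As a rough picture (my own gloss): a difference chain over the lattice is a decreasing chain a_1 ≥ a_2 ≥ ... ≥ a_n, and the Boolean element it describes is (a_1 ∧ ¬a_2) ∨ (a_3 ∧ ¬a_4) ∨ ..., so the hierarchy stratifies the generated Boolean algebra by the least length n of a chain describing a given element; the result reported in the talk provides, relative to the filter completion, a canonical chain of minimum length.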
Title: Point-free geometries - foundations and systems
Speaker: Rafał Gruszczyński (Nicolaus Copernicus University)
In a nutshell, point-free geometry is a branch of geometry in which the notion of point is absent from the inventory of basic concepts. Instead, regions or spatial bodies are assumed as primitives, next to an order relation on elements of the domain. In my talk I would like to discuss philosophical assumptions of point-free geometry and present two such systems: first, due to Polish mathematician Aleksander Śniatycki, based on the notion of half-plane, and second, due to Giangiacomo Gerla and the speaker, based on the notion of oval (a point-free counterpart of the notion of convex set).
Title: Hyper-MacNeille completions of Heyting algebras
Speaker: Frederik M. Lauridsen (ILLC)
Many intermediate logics for which no cut-free Gentzen-style sequent calculus is known may be captured by so-called hypersequent calculi without the cut-rule. For a large class of such hypersequent calculi a uniform algebraic proof of cut-admissibility may be given (Ciabattoni, Galatos, and Terui, 2017). The central construction of this proof gives rise to a new type of completion: the so-called hyper-MacNeille completion. We will briefly review the connection between the algebraic proof of cut-admissibility and MacNeille completions of Heyting algebras. We will then focus on describing the hyper-MacNeille completion of Heyting algebras. In particular, we will discuss the relationship between the MacNeille and the hyper-MacNeille completion.
This is ongoing work with John Harding.
Title: A perspective on non-commutative frame theory
Speaker: Ganna Kudryavtseva (University of Ljubljana)
We discuss an extension of fundamental results of frame theory to a non-commutative setting where the role of locales is taken over by etale localic categories. These categories are put in a duality with complete and infinitely distributive restriction monoids (restriction monoids being a well-established class of non-regular generalizations of inverse monoids). As a special case this includes the duality between etale localic groupoids and pseudogroups (defined as complete and infinitely distributive inverse monoids). The relationship between categories and monoids is mediated by a class of quantales called restriction quantal frames. Projecting down to topological setting, we extend the classical adjunction between locales and topological spaces to an adjunction between etale localic categories and etale topological categories. As a consequence, we deduce a duality between distributive restriction semigroups and spectral etale topological categories. Our work unifies and upgrades the earlier work by Pedro Resende, and also by Mark V. Lawson and Daniel H. Lenz.
The talk is based on a joint work with Mark V. Lawson.
Title: How to extend de Vries duality to completely regular spaces
De Vries duality yields a dual equivalence between the category of compact Hausdorff spaces and a category of complete Boolean algebras with a proximity relation on them, known as de Vries algebras. I will report on a recent joint work with Pat Morandi and Bruce Olberding on how to extend de Vries duality to completely regular spaces by replacing the category of de Vries algebras with certain extensions of de Vries algebras. This we do by first formulating a duality between compactifications and de Vries extensions, and then specializing to the extensions that correspond to Stone-Čech compactifications.
Title: Fine-Grained Computational Complexity
Speaker: Jouke Witteveen (ILLC)
Classically, complexity theory focuses on the hardest instances of a given length. A set is in P if there is a decision procedure for it that runs in polynomial time even on the most difficult-to-decide instances. Parameterized complexity theory, on the other hand, looks at the identification of easy instances. In this talk, we shall define parameterizations as independent objects and show that the class of parameterizations naturally forms a lattice. The parameterizations that put a given set in any of the standard parameterized complexity classes are filters in this lattice. From these insights, we conjecture a separation property for P.
Title: W Types with Reductions
Speaker: Andrew Swan (ILLC)
In type theory many inductively defined types can be formulated using the notion of W type. A basic example is the natural numbers. A natural number is either 0, or the successor Succ(n) of a natural number n we have already constructed. Moerdijk and Palmgren showed that when we interpret type theory in a locally cartesian closed category, W types can be characterised elegantly as the initial algebras for certain endofunctors (polynomial endofunctors). For example the natural numbers are the initial algebra for the endofunctor X → X + 1.
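As a concrete rendering of the preceding paragraph, here is a small Haskell sketch of my own (not from the talk): the natural numbers as the initial algebra of the polynomial endofunctor X ↦ 1 + X, with the fold as the unique algebra morphism out of it.

newtype Fix f = In (f (Fix f))            -- (least) fixed point of a functor

data NatF x = Zero | Succ x               -- the polynomial endofunctor X |-> 1 + X
instance Functor NatF where
  fmap _ Zero     = Zero
  fmap f (Succ x) = Succ (f x)

type Nat = Fix NatF                       -- the natural numbers

-- the unique algebra morphism from the initial algebra to any other algebra
fold :: Functor f => (f a -> a) -> Fix f -> a
fold alg (In t) = alg (fmap (fold alg) t)

toInt :: Nat -> Int
toInt = fold (\t -> case t of Zero -> 0; Succ n -> n + 1)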
I'll show a generalisation called W types with reductions. For example, we can modify the natural numbers by adding the following reduction. We start off with the natural numbers as usual, but later we identify each Succ(n) with its predecessor n (and so everything ends up identified with 0). We can use this idea to capture constructions in homotopical algebra where we modify a space by pasting on new cells and repeat this transfinitely. It is impossible to formulate this idea using initial algebras of endofunctors, so we move to pointed endofunctors, which are endofunctors P equipped with a natural transformation Id ⇒ P. Polynomial endofunctors can be generalised to *pointed* polynomial endofunctors, whose initial algebras are W types with reductions. In a large class of categories, including all predicative toposes, W types with reductions can always be constructed.
Title: A simple propositional calculus for compact Hausdorff spaces
Speaker: Nick Bezhanishvili (ILLC)
In recent years there has been a renewed interest in the modal logic community toward Boolean algebras equipped with binary relations. The study of such relations and their representation theory has a long history, and is related to the study of point-free geometry, point-free topology, and region based theory of space. Our primary examples of Boolean algebras with relations will be de Vries algebras, which are dual to compact Hausdorff spaces. Our main goal is to use the methods of modal logic and universal algebra to investigate the logical calculi of Boolean algebras with binary relations. This will lead, via de Vries duality, to simple propositional calculi for compact Hausdorff spaces, Stone spaces, etc.
Title: Uniform interpolation via an open mapping theorem for Esakia spaces
Speaker: Sam van Gool (ILLC)
We prove an open mapping theorem for the topological spaces dual to finitely presented Heyting algebras. This yields in particular a short, self-contained semantic proof of the uniform interpolation theorem for intuitionistic propositional logic, first proved by Pitts in 1992. Our proof is based on the methods of Ghilardi & Zawadowski. However, our proof does not require sheaves nor games, only basic duality theory for Heyting algebras.
S. J. v. Gool and L. Reggio, An open mapping theorem for finitely copresented Esakia spaces, Topology and its Applications, vol. 240, 69-77 (2018).
Title: Classical equivalents of intuitionistic implication
Speakers: Esther Boerboom and Noor Heerkens (ILLC)
The objective of our study was to find suitable classical equivalents of intuitionistic implication. Since the formula p implies q has infinitely many classical equivalents in the full fragment of intuitionistic propositional logic, we restricted ourselves first of all to finite fragments. In order to find the most suitable candidates we examined important features of the candidates in these fragments, such as reflexivity and transitivity. Additionally, we examined whether the formulas are weaker or stronger than intuitionistic implication and whether they are exact.
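For instance (an illustration of mine, not drawn from the study itself): ¬p ∨ q and ¬(p ∧ ¬q) are both classically equivalent to p → q, yet intuitionistically the former is strictly stronger and the latter strictly weaker than p → q; criteria such as the ones above are meant to adjudicate between such candidates.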
Title: Long-Term Values in Markov Decision Processes, (Co)Algebraically
Speaker: Frank Feys (TU Delft)
Markov Decision Processes (MDPs) provide a formal framework for modeling sequential decision making, used in planning under uncertainty and reinforcement learning. In the classical theory, in an MDP the objective of the agent is to make choices such that the expected total rewards, the long-term values, are maximized; a rule that in each state dictates the agent which choice to make in order to achieve such an optimal outcome is called an optimal policy. A classical algorithm to find an optimal policy is Policy Iteration. In this talk, we show how MDPs can be studied from the categorical perspective of coalgebra and algebra. Probabilistic systems, similar to MDPs but without rewards, have been extensively studied, also coalgebraically, from the perspective of program semantics. We focus on the role of MDPs as models in optimal planning, where the reward structure is central. Our aim is twofold. First, to give a coinductive explanation of the correctness of Policy Iteration using a new proof principle, based on Banach's Fixpoint Theorem, that we called Contraction Coinduction. Second, to show that the long-term value function of a policy with respect to discounted sums can be obtained via a generalized notion of corecursive algebra, designed to take boundedness into account.
(This talk is based on joint work with Helle Hvid Hansen and Larry Moss.)
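As a toy illustration of the fixed-point view of long-term values (a Haskell sketch of my own, with illustrative names; the talk's treatment is categorical): for a fixed policy, the discounted value function is the unique fixed point of a contraction on value functions, which Banach's theorem lets us approximate by iteration.

type State  = Int
type Policy = State -> [(State, Double)]   -- next-state distribution under the fixed policy
type Reward = State -> Double

-- one application of the value operator (a contraction for 0 <= gamma < 1)
stepValue :: Double -> Policy -> Reward -> (State -> Double) -> (State -> Double)
stepValue gamma trans r v s = r s + gamma * sum [ p * v s' | (s', p) <- trans s ]

-- approximate the unique fixed point by iterating n times from the zero function
valueIter :: Double -> Policy -> Reward -> Int -> State -> Double
valueIter gamma trans r n = iterate (stepValue gamma trans r) (const 0) !! n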
Title: MacNeille transferability
In 1966 Grätzer introduced the notion of transferability for finite lattices. A finite lattice L is transferable if whenever L has an embedding into the ideal completion of a lattice K, then L already has an embedding into K. In this talk we will introduce the analogous notion of MacNeille transferability, replacing the ideal completion with the MacNeille completion. We will pay particular attention to MacNeille transferability of finite distributive lattices with respect to the class of Heyting algebras. This will also allow us to find universal classes of Heyting algebras closed under MacNeille completions.
This is joint work with G. Bezhanishvili, J. Harding, and J. Ilin.
Title: Path categories
The purpose of this talk is to introduce the notion of a path category (short for a category with path objects). Like other notions from homotopical algebra, such as a category of fibrant objects or a Quillen model structure, it provides a setting in which one can develop some homotopy theory. For a logician this type of category is interesting because it provides a setting in which many of the key concepts of homotopy type theory (HoTT) make sense. Indeed, path categories provide a syntax-free way of entering the world of HoTT, and familiarity with (the syntax of) type theory will not be assumed in this talk. Instead, I will concentrate on basic examples and results. (This is partly based on joint work with Ieke Moerdijk.)
Title: Duality in Logic and Computer Science
Speakers: Thijs Benjamins, Chase Ford, Kristoffer Kalavainen, Kyah Smaal, and Tatevik Yolyan (ILLC)
This special session of the A|C seminar will consist of five 25-minute presentations by the participants of the MoL January project taught by Sam van Gool. Each participant studied one recent article on duality theory and its applications in logic and computer science, and will give a presentation in which they summarize the main results of the paper and highlight points of particular interest. The following papers will be presented:
G. Bezhanishvili & N. Bezhanishvili, An Algebraic Approach to Canonical Formulas: Modal Case (2011), presented by Kyah Smaal;
T. Colcombet & D. Petrisan, Automata and Minimization (2017), presented by Thijs Benjamins;
S. Ghilardi, Continuity, Freeness and Filtrations (2010), presented by Tatevik Yolyan;
W. Holliday, Possibility Frames and Forcing for Modal Logic (2016), the section on duality theory, presented by Chase Ford;
T. Place & M. Zeitoun, Separating Regular Languages with First-order Logic (2016), presented by Kristoffer Kalavainen.
There will be some time for discussion and Q&A after each presentation.
Title: Stone-Priestley duality for MTL triples
Speaker: Wesley Fussner (University of Denver)
Triples constructions date back to Chen and Grätzer's 1969 decomposition theorem for Stone algebras: each Stone algebra is characterized by the triple consisting of its lattice of complemented elements, its lattice of dense elements, and a homomorphism associating these structures. There is a long history of dual analogues of this construction, with Priestley providing a conceptually similar treatment on duals in 1972 and Pogel also exploring dual triples in his 1998 thesis. At the same time, triples decompositions have been extended to account for richer algebraic structure. For example, Aguzzoli-Flaminio-Ugolini and Montagna-Ugolini have provided similar triples decompositions for large classes of monoidal t-norm (MTL) algebras, the algebraic semantics for monoidal t-norm based logic. Here we provide a duality-theoretic perspective on these recent innovations, and see that Stone-Priestley duality offers a clarifying framework that sheds light on these constructions.
This is joint work with Sara Ugolini.
Title: Choice-free Stone duality
In this talk I will discuss a Stone-like topological duality for Boolean algebras avoiding the Prime Filter Theorem. This is joint work with Wes Holliday.
Title: Ordering Free Groups
Speaker: George Metcalfe (Universität Bern)
Ordering conditions for groups provide useful tools for the study of various relationships between group theory, universal algebra, topology, and logic. In this talk, I will describe a new algorithmic ordering condition for extending partial orders on groups to total orders. I will then show how this condition can be used to show that extending a finite subset of a free group to a total order corresponds to checking validity of a certain inequation in the class of totally ordered groups. As a direct consequence, we obtain a new proof that free groups are orderable.
Title: Modal logics of the real line
Speaker: Andrey Kudinov (HSE Moscow)
The real line is probably the best known and most studied topological space. We consider six modal languages for reasoning about it: two unimodal and four bimodal. The modality of the unimodal languages, and the first modality of the bimodal ones, is interpreted using either the closure or the derivation operator of the topology; for the second modality in the bimodal setting we use the universal or the difference modality. We will discuss the logics of the real line that arise in all these languages.
Title: Nominal Sets in Constructive Set Theory
Nominal sets are simple structures originally used (by Gabbay and Pitts) to provide an abstract theory of the binding of variables. Since then, they have seen application in other areas, including the semantics of homotopy type theory. A basic notion in nominal sets is that of finite support. However, if we work in constructive set theory (roughly speaking, set theory avoiding the axiom of excluded middle), then finite sets can behave very differently to how they behave with classical logic. I will give a couple of examples to show how finite support in nominal sets can behave in constructive set theory. The results illustrate how the notion of finite differs in constructive set theory, and the proof illustrates some common techniques used in constructive set theory.
Title: Some model theory for the modal μ-calculus
We discuss a number of semantic properties pertaining to formulas of the modal μ-calculus. For each of these properties we provide a corresponding syntactic fragment, in the sense that a mu-calculus formula φ has the given property iff it is equivalent to a formula φ' in the corresponding fragment. Since this formula φ' will always be effectively obtainable from φ, as a corollary, for each of the properties under discussion, we prove that it is decidable in elementary time whether a given μ-calculus formula has the property or not. The properties that we study have in common that they all concern the dependence of the truth of the formula at stake on a single proposition letter p. In each case the semantic condition on φ will be that φ, if true at a certain state in a certain model, will remain true if we restrict the set of states where p holds to a special subset of the state space. Important examples include the properties of complete additivity and (Scott) continuity, where the special subsets are the singletons and the finite sets, respectively. Our proofs for these characterization results will be automata-theoretic in nature; we will see that the effectively defined maps on formulas are in fact induced by rather simple transformations on modal automata.
Title: On generalized Van-Benthem-type characterizations
Speaker: Grigory Olkhovikov (Ruhr-Universitaet Bochum, Institut fuer Philosophie II)
A guarded connective is a propositional formula built from monadic atoms preceded by a (possibly empty) prefix which consists of quantifiers guarded by chains of binary relations. An application of a guarded connective to a tuple of monadic formulas is the result of the respective substitution of these formulas for the monadic atoms in the connective. A guarded fragment is a set of formulas containing monadic atoms and closed w.r.t. substitutions into a fixed finite set of guarded connectives. A guarded fragment is thus a generalized version of a set of standard translations of formulas of a given intensional propositional logic into first-order classical logic. We generalize bisimulations so as to get a model-theoretic characterization of a wide class of guarded fragments. The Van Benthem Modal Characterization Theorem itself, as well as many analogous results obtained for other intensional propositional systems, comes out as a special case of this generalized result.
Title: Bitopology and four-valued logic
Speaker: Tomáš Jakl (Charles University, Prague and University of Birmingham)
Bilattices and d-frames are two different kinds of structures with a four-valued interpretation. Whereas d-frames were introduced with their topological semantics in mind, the theory of bilattices has a closer connection with logic. In this talk we introduce a common generalisation of both structures and show that this not only still has a clear bitopological semantics, but that it also preserves most of the original bilattice logic. Moreover, we also obtain a new bitopological interpretation for the connectives of four-valued logic.
Title: MV-algebras and the Pierce-Birkhoff conjecture
Speaker: Serafina Lapenta (University of Salerno)
In this talk, we present a new class of MV-algebras, namely fMV-algebras. They are obtained when we endow an MV-algebra with a "ring-like" product and a scalar multiplication. We will discuss this class from an algebraic point of view, and characterize three relevant subclasses. The main motivation for this investigation is the long-standing Pierce-Birkhoff conjecture, which aims to characterize the free lattice-ordered algebra (free lattice-ordered ring) as the algebra of piecewise polynomial functions with real coefficients (integer coefficients). The conjecture has been open since 1956: it has been solved for n ≤ 2 by L. Mahé, while it remains unsolved for n > 3. In this framework, the Pierce-Birkhoff conjecture becomes a normal form problem for the logic of fMV-algebras. Finally, we use the tensor product of MV-algebras – defined by D. Mundici – to provide a different perspective on the problem.
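For orientation (a standard gloss, stated roughly): the conjecture asks whether every continuous piecewise-polynomial function on R^n can be written as a finite supremum of finite infima of polynomials, i.e. as sup_i inf_j p_ij; the simplest instance is |x| = x ∨ (−x), a piecewise-polynomial function that is already such a lattice-ring term.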
Title: Studying profinite semigroups via logic
Speaker: Sam van Gool (CUNY City College and ILLC)
My aim in this talk is to introduce my current joint research project with Benjamin Steinberg on studying profinite semigroups via logic, and to indicate some tentative first results. A profinite semigroup is a Stone space equipped with a continuous semigroup operation. Through Stone duality, such semigroups correspond to certain Boolean algebras with operators. For this talk, the relevant instance of this phenomenon is the following. By a classical theorem of Schützenberger, the free profinite aperiodic semigroup over a finite set A is the dual space of the Boolean algebra of first-order definable sets of finite A-labelled linear orders ("A-words"). Therefore, elements of this semigroup can be viewed as elementary equivalence classes of models of the theory of finite A-words, in the sense of classical first-order model theory. We exploit this view of the free profinite aperiodic semigroup and use model-theoretic methods to prove both old and new things about it.
Title: Investigations on Gödel's incompleteness properties for guarded fragment and other decidable versions of FOL
Speaker: Mohamed Khaled (Central European University, Budapest)
The guarded fragment of first order logic was introduced by Andréka, van Benthem and Németi in [4]. It is very closely connected to FOL with generalized semantics, which was introduced by Henkin in [1] and Németi in [2]. These logics were considered by many logicians and it was shown that they have a number of desirable properties, e.g. decidability. These logics are considered to be the most important decidable versions of first order logic among the large number that have been introduced over the years. They have applications in various areas of computer science and were more recently shown to be relevant to description logics and to database theory. For a survey on generalized semantics see [6]. In this talk, we investigate Gödel's incompleteness property for the above logics. We show that the guarded fragment has neither Gödel's nor weak Gödel's incompleteness properties. The reason for that is the presence of the polyadic quantifiers. Indeed, we show that, in contrast, both the solo-quantifier fragments of the guarded fragment and of FOL with generalized semantics have weak Gödel's incompleteness property, because of the absence of the polyadic quantifiers. All the results to be mentioned in this talk are either algebraic or induced from some algebraic results. For instance, we prove that Henkin's generalization of FOL with solo-quantifiers has weak Gödel's incompleteness property by showing that free algebras of the classes of relativized cylindric algebras are not atomic. This gives an answer to a long-standing open problem posed by Németi in 1985 in [2], [3] and [5].
1. L. Henkin (1950). The Completeness of Formal Systems. PhD thesis, Princeton University, Princeton, USA.
2. I. Németi (1986). Free algebras and decidability in algebraic logic. Academic doctoral Dissertation, Hungarian Academy of Sciences, Budapest, Hungary.
3. H. Andréka, J. D. Monk and I. Németi (1991). Algebraic Logic. North Holland, Amsterdam, Colloquia Mathematica Societatis János Bolyai, 54.
4. H. Andréka, J. van Benthem and I. Németi (1998). Modal languages and bounded fragments of predicate logic. Journal of Philosophical Logic 27 (3), pp. 217-274.
5. H. Andréka, M. Ferenczi and I. Németi (2013). Cylindric-like algebras and algebraic logic. Springer Verlag.
6. H. Andréka, J. van Benthem, N. Bezhanishvili and I. Németi (2014). Changing a semantics: opportunism or courage? Springer Verlag.
Title: Subminimal Logics of Negation
Speaker: Almudena Colacito (ILLC)
Starting from the original formulation of Minimal Propositional Logic proposed by Johansson, we investigate some of its relevant subsystems. The main focus is on the negation, defined as a primitive unary operator in the language. Each of the considered subsystems is axiomatized by extending the positive fragment of intuitionistic logic by means of some axioms of negation. The basic logic of our setting is the one in which the negation operator has no properties at all, except that of being a function. A Kripke semantics is developed for these logics, where the semantic clause for negation is determined by a persistent function N. The axioms defining extensions of the basic system enrich such a function with different properties (e.g., anti-monotonicity). We define a cut-free sequent calculus system and we work with it, proving some standard results for the considered logics. (This is a Master Thesis project supervised by Marta Bílková and Dick de Jongh)
Title: Weak Subintuitionistic Logic
Speaker: Fatemeh Shirmohammadzadeh Maleki (Shahid Beheshti University, Tehran and ILLC)
Subintuitionistic logics were studied by Corsi in 1987, who introduced a basic system F, and by Restall in 1994, who defined a similar system SJ, both with a Kripke semantics. Basic logic, a much studied extension of F, was already introduced by Visser in 1981. We will introduce a system WF, weaker than F, and study it by means of a neighborhood semantics. (This is joint work with Dick de Jongh.)
Title: Monadic second order logic as the model companion of temporal logic
Speaker: Silvio Ghilardi (University of Milan)
In model theory, a model companion of a theory is a first-order description of the models where all solvable systems of equations and non-equations have solutions. We newly apply this model-theoretic framework in the realm of monadic second-order and temporal logic. (This is joint work with Sam van Gool)
Title: Duality for Non-monotonic Consequence Relations and Antimatroids
Speakers: Johannes Marti and Riccardo Pinosio (ILLC)
We present a duality between non-monotonic consequence relations over Boolean algebras and antimatroids over Stone spaces that extends the Stone duality. Non-monotonic consequence relations over Boolean algebras provide an abstract algebraic setting for the study of conditional logics and KLM-style non-monotonic consequence relations. Antimatroids over Stone spaces are a generalization of the usual order semantics for conditional logics and KLM-style systems. In the finite case antimatroids have already been studied as a combinatorial notion of convexity. The existing theory of antimatroids clarifies the constructions needed in completeness proofs for conditional logics and KLM-style systems with respect to the order semantics.
Title: Some open problems concerning MSO for coalgebras
Speaker: Sebastian Enqvist (ILLC)
I present some recent joint work with Fatemeh Seifan and Yde Venema, in which we introduced monadic second-order logic interpreted on coalgebras. Our main results provided conditions under which the coalgebraic modal mu-calculus for a given functor is the bisimulation invariant fragment of the corresponding MSO language. The focus of the talk will be on some open problems related to this topic.
Title: Derivative and counting operators on topological spaces
Speaker: Alberto Gatto (Imperial College London)
In the book 'Topological model theory' by Flum and Ziegler (1980), the authors introduced two first-order languages, L2 and its fragment Lt, to talk about topological spaces. Unlike L2, Lt enjoys several properties 'typical' of first-order logics, such as compactness and the Löwenheim-Skolem Theorem; moreover, Lt can express 'non-trivial' topological properties such as T0, T1, T2 and T3, and no language more expressive than Lt enjoys compactness and the Löwenheim-Skolem Theorem; finally, Lt is equivalent to the fragment of L2 that is invariant under changing basis.
Languages to talk about topological spaces have been defined in modal terms as well. They have a long history (see e.g. the seminal paper 'The algebra of topology' by McKinsey and Tarski (1944)) and there is ongoing interest in the field. The main idea is to associate propositional variables to points of a topological space and give a topologically flavored semantics to modal operators.
I will introduce the first-order languages L2 and Lt, and the modal language Lm with derivative and counting operators. I will then illustrate original work which establishes the equivalence between Lt and Lm over T3 spaces, and that the result fails over T2 spaces. I will then present a recent axiomatisation of the Lm theory of the classes of all T3, T2, and T1 spaces. I will then discuss the open problem of proving that Lm enriched with only finitely many other modal operators is still less expressive than Lt over T2 spaces, and present some partial results. Finally, I will conclude by illustrating possible directions of future work.
Title: Coalgebraic many-valued logics
Speaker: Marta Bílková (Charles University in Prague)
There are in general (at least) two ways to obtain expressive modal logical languages for a certain type of systems modelled as coalgebras: one is based on a single modal operator whose arity is given by the coalgebra functor and whose semantics is given by relation lifting, the other is based on a logical connection (a dual adjunction between semantics and syntax) producing, for a given coalgebra functor, all possible modalities (also known as predicate liftings). We discuss both of these approaches in the case where valuations of formulas in coalgebraic models are many-valued, in particular taking values in a fixed residuated lattice or a quantale V. The possibilities we consider are: to stay in the category of sets, which in particular means, for the first approach, to prove a many-valued relation lifting theorem, and, for the second approach, to find an appropriate logical connection; or to take the truth values seriously and change the base category to the enriched setting of V-categories. In the second case we are still able to provide a relation-lifting theorem. This is joint work with Matej Dostal.
Title: The Wellfounded Parts of Final Coalgebras are Initial Algebras
Speaker: Larry Moss (Indiana University Logic Program and Math department)
This talk is about the relation between initial algebras and final coalgebras. The prototypes of these results are about functors on sets. For many set functors F, the terminal coalgebra carries a complete metric which is the Cauchy completion of the metric induced on the initial algebra. In a different direction, the final coalgebra often has a CPO ordering which is the ideal completion of the ordering induced on the initial algebra. These basic results are due to Barr and Adamek. I'll review them, and then strike out on a different kind of relation between initial algebras and final coalgebras, one which I think clarifies the matter of relating initial algebras and final coalgebras. It is based on the categorical formulation of well-foundedness coming from Osius and then Taylor, and then further studied by Adamek, Milius, Sousa, and myself.
Title: Stable canonical rules: bounded proofs, dichotomy property and admissible bases
During the talk I shall review recent joint work (with G. & N. Bezhanishvili, D. Gabelaia, M. Jibladze) concerning canonical multi-conclusion rules. In particular, bases for admissible rules in Int, K4, S4 are obtained by establishing a dichotomy property (in Jerabek's style).
Title: Stone duality above dimension zero
Speaker: Vincenzo Marra (University of Milan)
In 1969 Duskin proved that the dual of the category $\mathsf{KHaus}$ of compact Hausdorff spaces and continuous maps is monadic over Set [2, 5.15.3]. As a consequence, $\mathsf{KHaus^{op}}$ is a variety, that is, it must be axiomatisable by equations in an algebraic language that is possibly infinitary (=operations of infinite arity are allowed). By contrast, it can be shown that the endofunctor of the induced monad on $\mathsf{Set}$ does not preserve directed colimits, and this entails that finitary operations do not suffice to axiomatise $\mathsf{KHaus^{op}}$. (Banaschewski proved a considerably stronger negative result in [1].) However, Isbell exhibited in [3] finitely many finitary operations, along with exactly one operation of countably infinite arity, that do suffice. The problem of axiomatising by equations this variety has so far remained open. Using Chang's MV-algebras as a key tool, we provide an axiomatisation that, moreover, is finite. We introduce by this finite axiomatisation the infinitary variety of $\delta$-algebras, and we prove that it is dually equivalent to $\mathsf{KHaus}$. In a very precise sense, this extends Stone duality from Boolean spaces to compact Hausdorff spaces.
1. Bernhard Banaschewski, More on compact Hausdorff spaces and finitary duality, Canad. J. Math. 36 (1984), no. 6, 1113–1118.
2. John Duskin,Variations on Beck's tripleability criterion, Reports of the Midwest Category Seminar, III, Springer, Berlin, 1969, pp. 74–129.
3. John Isbell,Generating the algebraic theory of C(X), Algebra Universalis 15 (1982), no. 2, 153–155.
Title: Modal logic of topology
Speaker: Nick Bezhanishvili (ILLC) and Jan van Mill (KdVI)
In this talk we will discuss topological semantics of the well-known modal logic S4.3. This logic is sound and complete with respect to hereditarily extremally disconnected spaces. We will show how to construct a Tychonoff hereditarily extremally disconnected space X whose logic is S4.3.
Title: Global caching for the alternation-free coalgebraic mu-calculus
Speaker: Daniel Hausmann (Friedrich-Alexander University of Erlangen and Nurnberg)
The extension of basic coalgebraic modal logic by fixed point operators leads to the so-called coalgebraic mu-calculus [1], generalizing the well-known (propositional) mu-calculus. We consider the satisfiability problem of its alternation-free fragment; in order to decide the problem, we introduce a global caching algorithm (generalizing ideas from [2] and [3]) which employs a non-trivial process of so-called propagation, keeping track of least fixed point literals (referred to as eventualities) in order to ensure their eventual satisfaction. Our algorithm operates over subsets of the ordinary Fischer-Ladner closure of the target formula; however, the model construction in the accompanying completeness proof yields so-called focussed models of size at most n*(4^n), where n denotes the size of the satisfiable target formula - in particular improving upon previously known bounds on model size for e.g. ATL and the alternation-free mu-calculus (both 2^O(n*log n)).
[1] C. Cirstea, C. Kupke, and D. Pattinson. EXPTIME tableaux for the coalgebraic μ-calculus.
[2] M. Lange, C. Stirling. Focus games for satisfiability and completeness of temporal logic.
[3] R. Goré, C. Kupke, D. Pattinson, L. Schröder. Global Caching for Coalgebraic Description Logics.
Title: Dependency as question entailment
Speaker: Ivano Ciardelli (ILLC, University of Amsterdam)
Over the past few years, a tight connection has emerged between logics of dependency and logics of questions. In this talk, I will show that this connection, far from being an accident, stems from a fundamental relation between dependency and questions: once we expand our view on logic by bringing questions into the picture, dependency emerges as a facet of the familiar notion of entailment, namely, entailment among questions. Besides providing a neat and insightful conceptual picture, this perspective yields the tools for a general and well-behaved logical account of dependencies.
Title: Up-to techniques for bisimulations with silent moves
Speaker: Daniela Petrişan (Radboud University, Nijmegen)
Bisimulation is used in concurrency theory as a proof method for establishing behavioural equivalence of processes. Up-to techniques can be seen as a means of optimizing proofs by coinduction. For example, to establish that two processes are equivalent one can exhibit a smaller relation, which is not a bisimulation, but rather a bisimulation up to a certain technique, say `up-to contextual closure'. However, the up-to technique at issue has to be sound, in the sense that any bisimulation up-to should be included in a bisimulation.
In this talk I will present a general coalgebraic framework for proving the soundness of a wide range of up-to techniques for coinductive unary predicates, as well as for bisimulations. The specific up-to techniques are obtained using liftings of functors to appropriate categories of relations or predicates. In the case of bisimulations with silent moves the situation is more complex. Even for simple examples like CCS, the weak transition system gives rise to a lax bialgebra, rather than a bialgebra. In order to prove that up-to context is a sound technique we have to account for this laxness. The flexibility and modularity of our approach, due in part to using a fibrational setting, pays off: I will show how to obtain such results by changing the base category to preorders.
This is joint work with Filippo Bonchi, Damien Pous and Jurriaan Rot.
Title: Strong Completeness for Iteration-free Coalgebraic Dynamic Logics.
Speaker: Helle Hansen (TU Delft)
We present a (co)algebraic treatment of iteration-free dynamic modal logics such as Propositional Dynamic Logic (PDL) and Game Logic (GL), both without star. The main observation is that the program/game constructs of PDL/GL arise from monad structure, and the axioms of these logics correspond to certain compatibility requirements between the modalities and this monad structure. Our main contribution is a general soundness and strong completeness result for PDL-like logics for T-coalgebras where T is a monad and the "program" constructs are given by sequential composition, test, and pointwise extensions of operations of T.
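To give a flavour of how program constructs arise from monad structure, here is a minimal Haskell sketch of my own, using the list monad as a stand-in for the powerset monad (an illustration, not the talk's general construction): sequential composition is Kleisli composition, and tests come from predicates.

import Control.Monad ((>=>))

type Prog s = s -> [s]                   -- nondeterministic "programs" over states s

seqc :: Prog s -> Prog s -> Prog s       -- sequential composition a ; b
seqc = (>=>)

test :: (s -> Bool) -> Prog s            -- test ?phi: proceed iff the predicate holds
test phi s = [ s | phi s ]

choice :: Prog s -> Prog s -> Prog s     -- nondeterministic choice a + b
choice a b s = a s ++ b s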
Title: Residuated Basic Algebras
Speaker: Minghui Ma (Southwest University)
Visser's basic propositional logic BPL is the subintuitionistic logic which is characterized by the class of all transitive Kripke frames. Algebras for BPL are called basic algebras (BCA), which are distributive lattices with strict implication (called basic implication). The basic implication is a binary operator over a bounded distributive lattice satisfying certain conditions. It is not intuitionistic implication and hence not the right residual of conjunction.
We introduce residuated basic algebras (RBA) which are defined as distributive lattice ordered residuated groupoids with additional axioms for the product operator. Those additional axioms correspond to some axioms for basic algebras. The variety RBA has the finite embeddability property, and so does the variety BCA.
The logic RBL determined by the variety RBA can be shown as a conservative extension of BPL. There are several ways for proving the conservativity. One typical way is to apply the method of canonical extension. Another way is realized by Kripke semantics. Then we present a Gentzen-style sequent system for RBL, from which a sequent calculus for BPL is obtained if we drop the rules for additional operators. The sequent calculus for RBL enjoys cut elimination and the subformula property.
One application is that we can use the sequent calculus for RBL and the conservativity to prove that intuitionistic logic is embeddable into BPL. The embedding is defined on sequents. It follows that classical propositional logic is also embeddable into BPL. The above is my joint work with Zhe Lin (Institute of Logic and Cognition, Sun Yat-Sen University, Guangzhou, China).
Our approach can be extended to the minimal subintuitionistic logic which is sound and complete with respect to the class of all Kripke frames. Moreover, we can define a syntax of propositional formulas such that we can get sequent calculi for extensions of the minimal subintuitionistic logics which enjoy cut elimination and some other properties.
Finally, I will comment on some embedding theorems for extensions of the minimal subintuitionistic logics into classical modal logics.
Title: Non-self-referential realizable fragments of modal and intuitionistic logics
Speaker: Junhua Yu (Tsinghua University)
In the framework of justification logic, formula t:F generally means that term t is a justification of formula F. What is interesting is that t may also occur in F, giving the formula t:F(t) a self-referential meaning, i.e., t is a justification of an assertion F about t itself. Such self-referential formulas are necessary for justification logic to offer constructive semantics for many modal and intuitionistic logics via Artemov's realization. In this talk, we will see research on fragments of modal and intuitionistic logics consisting of theorems that are free of self-referentiality via realization.
Title: Modal Logics for Presheaf Categories
Speaker: Giovanni Cina (ILLC, University of Amsterdam)
In this paper we investigate a Modal Logic perspective on presheaf categories. We show that, for a given small category C, the category of presheaves over C can be embedded into the category of transition systems and functional bisimulations. We characterize the image of such an embedding, defining what we call "typed transition systems arising from C", and prove that $Set^{C^{op}}$ is equivalent to the category of such typed transition systems. Coupled with the results in previous papers, this equivalence suggests a procedure to turn a transition system into a deterministic one.
Typed transition systems seem to offer a general relational semantics for typed processes. For this reason we propose a modal logic for typed transition systems arising from C, called $LTTS^{C}$. We prove that a first version of this logic, in which we add an infinitary rule, is sound and strongly complete for the class of these relational structures. Removing the infinitary rule we obtain a second system which is sound and weakly complete.
Title: Completeness and Incompleteness in Nominal Kleene Algebra
Speaker: Dexter Kozen (Cornell University)
Gabbay and Ciancia (2011) presented a nominal extension of Kleene algebra as a framework for trace semantics with dynamic allocation of resources, along with a semantics consisting of nominal languages. They also provided an axiomatization that captures the behavior of the scoping operator and its interaction with the Kleene algebra operators and proved soundness over nominal languages. In this work we show that the axioms are complete and describe the free language models. (Joint work with Konstantinos Mamouras and Alexandra Silva)
Title: Polyhedra: from geometry to logic
Speaker: Andrea Pedrini (University of Milan)
We study the Stone-Priestley dual space of the lattice of subpolyhedra of a compact polyhedron, with motivations coming from geometry, topology, ordered algebra, and non-classical logic. We give a geometric representation of the spectral space of prime filters of the lattice of subpolyhedra of a compact polyhedron in terms of directions in the Euclidean space.
From the perspective of algebraic logic, our contribution is a geometric investigation of lattices of prime theories in Lukasiewicz logic, possibly extended with real constants. We use the geometry of subpolyhedra to interpret provability in Lukasiewicz infinite-valued propositional logic into Intuitionistic propositional logic.
Title: Exact Unification Type and Admissible rules
Speaker: Leonardo Cabrer (University of Florence, Italy)
(Joint work with George Metcalfe)
Motivated by the study of admissible rules, we introduce a new hierarchy of "exact" unification types. The main novelty of this hierarchy is the definition of the order between unifiers: a unifier is said to be more exact than another unifier if all identities unified by the first are unified by the second. This simple change of perspective, from syntactic to semantic comparison of unifiers, has a large number of consequences. Exact unification has two important features: firstly, for each problem the exact unification type is always smaller than or equal to the classical unification type, and secondly, there are equational theories having unification problems of classical unification type zero or infinite but whose exact unification type is finite. We will present examples of equational classes distinguishing the two hierarchies.
We will also present a Ghilardi-style algebraic interpretation of this hierarchy that uses exact algebras rather than projective algebras.
Title: PDL has Craig Interpolation since 1981
Speaker: Malvin Gattinger (ILLC, University of Amsterdam)
Many people believe it to be an open question whether Propositional Dynamic Logic has Craig Interpolation. At least three proofs have been attempted and two of them published, but all of them have also been claimed, more or less publicly, to be wrong.
We recover a proof originally presented in a conference paper by Daniel Leivant (1981). The main idea and method here is still the same: Using the small model property we obtain a finitary sequent calculus for PDL. Furthermore, in proofs of star-formulas we find a repetitive pattern that allows us to construct interpolants. Our new presentation fixes many details and uses new results to simplify and clarify the proof. In particular we defend the argument against a criticism expressed by Marcus Kracht (1999), to show that the method indeed does not only apply to finitary variants of PDL but covers the whole logic.
Title: Changing a Semantics: Opportunism, or Courage?
Speaker: Johan van Benthem (ILLC, University of Amsterdam)
Henkin's models for higher-order logic are a widely used technique, but their status remains a matter of dispute. This talk reports on work with Hajnal Andréka, Nick Bezhanishvili & István Németi on the scope and justification of the method (ILLC Tech Report, to appear in M. Manzano et al. eds., "Henkin Book"). We will look at general models in terms of 'intended models', calibrating proof strength, algebraic representation, lowering complexity of core logics, and absoluteness. Our general aim: general perspectives on design of generalized models in logic, and links between these. We state a few new results, raise many open problems, and, in particular, explore the fate of generalized models on a less standard benchmark: fixed-point logics of computation.
Lit. http://www.illc.uva.nl/Research/Publications/Reports/PP-2014-10.text.pdf
Title: Axiomatizations of intermediate logics via the pseudo-complemented lattice reduct of Heyting algebras
Speaker: Julia Ilin (ILLC, University of Amsterdam)
In this work we study axiomatizations of intermediate logics by means of generalized canonical formulas based on reducts of Heyting algebras. The generalized canonical formulas encode the reduct structure of the Heyting algebra fully, but encode the behavior of the remaining structure only partially. Such canonical formulas were studied for the meet-implication reduct and the meet-join reduct of Heyting algebras. In both cases the corresponding canonical formulas are able to axiomatize all intermediate logics.
In this talk, we will see that also the meet-join-negation reduct of Heyting algebras can be used in a similar way. Moreover, we investigate new classes of intermediate logics that the meet-join-negation reduct of Heyting algebras gives rise to.
[1] G. Bezhanishvili and N. Bezhanishvili. "An algebraic approach to canonical formulas: Intuitionistic case." In: Review of Symbolic Logic 2.3 (2009).
[2] G. Bezhanishvili and N. Bezhanishvili. "Locally finite reducts of Heyting algebras and canonical formulas". To appear in Notre Dame Journal of Formal Logic. 2014.
Title: Duality and canonicity for Boolean algebra with a relation
Speaker: Sumit Sourabh (ILLC, University of Amsterdam)
In this work we study Boolean algebras (BAs) with a binary relation satisfying some properties. In particular, we introduce categories of BA with an operator relation (BAOR) which preserves finite joins in each coordinate and BA with a dual operator relation (BADOR) which preserves finite meets in each coordinate. The maps in these categories are Boolean homomorphisms preserving the relation. It turns out that well-known algebras such as de Vries algebras, contact algebras, lattice subordinations and Boolean proximity lattices are examples of objects in these categories. Using the characteristic map of the relations, we show that the category of BAOR (resp. BADOR) is isomorphic to the category of BAs with binary operators (resp. dual operators) into the 2-element BA 2. This allows us to import results from the theory of BAOs into our setting.
We show that both finite BAOR and BADOR are dual to the category of Kripke frames with weak p-morphisms. In the infinite case, we show that both BAOR and BADOR are dual to the category of Stone spaces with closed binary relations and continuous maps. We also define canonical extensions of BAOR and BADOR, and show that Sahlqvist inequalities are canonical.
Title: Adding the Supremum to Interpretability Logic
Speaker: Paula Henk (ILLC, University of Amsterdam)
The relation of relative interpretability was first introduced and carefully studied by Tarski, Mostowski and Robinson. It can be seen as a definition of what it means for one theory to be at least as strong as another one. A collection of finite extensions of Peano Arithmetic (PA) that are equally strong in this sense is called a degree. The collection of all degrees (together with the relation of interpretability) is a distributive lattice.
We are interested in using modal logic for investigating what is provable in PA about the lattice of its interpretability degrees. Part of the answer is provided by the interpretability logic ILM. However, the existence of the supremum in the lattice of interpretability degrees is not expressible in ILM. In order to eliminate this deficiency, we want to add to ILM a new modality. As it turns out, a unary modality is sufficient for this purpose. Furthermore, the dual of this unary modality satisfies the axioms of GL (provability logic), and can therefore be seen as a nonstandard provability predicate. The final part of the talk is concerned with the bimodal logic that results when adding to GL a unary modality whose intended meaning is such a nonstandard provability predicate.
Title: Duality for Logic of Quantum Actions
Speaker: Shengyang Zhong (ILLC, University of Amsterdam)
Traditionally, study in quantum logic and foundations of quantum theory focuses on the lattice structure of testable properties of quantum systems. The essence of this structure is captured in the definition of a Piron lattice [2]. In 2005, Baltag and Smets [1] proposed to organize states of a quantum system into a labelled transition system with tests of properties and unitary evolutions as transitions, whose non-classical nature is caused by the strangeness of quantum behaviour. Moreover, the results in [1] hint at a representation theorem for Piron lattices using this kind of labelled transition systems, called quantum dynamic frames.
In this talk, I will present an extension of the work of Baltag and Smets into a duality result. I will define four categories, two of Piron lattices and two of quantum dynamic frames, and show two dualities between two pairs of them. This result establishes, in one direction of the duality, that quantum dynamic frames capture many essentials of quantum systems; in the other direction, it justifies a more dynamic and intuitive way of thinking about Piron lattices. Moreover, this result is very useful for developing a logic of quantum actions.
This talk is based on an on-going, joint paper of Jort Bergfeld, Kohei Kishida, Joshua Sack and me.
[1] Baltag, Alexandru, and Sonja Smets, `Complete axiomatizations for quantum actions', International Journal of Theoretical Physics, 44 (2005), 12, 2267-2282.
[2] Piron, Constantin, Foundations of Quantum Physics, W.A. Benjamin Inc., 1976.
Title: Many-valued modal logic over residuated lattices via duality (joint work with Andrew Craig - University of Johannesburg)
Speaker: Umberto Rivieccio (Delft University of Technology)
One of the latest and most challenging trends of research in non-classical logic is the attempt to enrich many-valued systems with modal operators. This allows one to formalize reasoning about vague or graded properties in those contexts (e.g., epistemic, normative, computational) that require the additional expressive power of modalities. This enterprise is thus potentially relevant not only to mathematical logic, but also to philosophical logic and computer science. A very general method for introducing the (least) many-valued modal logic over a given finite residuated lattice is described in [1]. The logic is defined semantically by means of Kripke models that are many-valued in two different ways: the valuations as well as the accessibility relation among possible worlds are both many-valued. Providing complete axiomatizations for such logics, even if we enrich the propositional language with a truth-constant for every element of the given lattice, is a non-trivial problem, which has been only partially solved to date. In this presentation I report on ongoing research in this direction, focusing on the contribution that the theory of natural dualities can give to this enterprise. I show in particular that duality allows us to adapt the method used in [1] to prove completeness with respect to local modal consequence, obtaining completeness for global consequence, too (a problem that, in full generality, was left open in [1]). Besides this, our study is also a contribution towards a better general understanding of quasivarieties of (modal) residuated lattices from a topological perspective.
References [1] F. Bou, F. Esteva, L. Godo, and R. Rodrìguez. On the minimum many-valued modal logic over a finite residuated lattice. Journal of Logic and Computation, 21(5):739-790, 2011.
Title: Using Admissible Rules to Characterise Logics
Speaker: Jeroen Goudsmit (Utrecht University)
The admissible rules of a logic are those rules under which the set of its theorems is closed. A most straightforward example is the rule A ∨ B / {A,B}, which is admissible precisely if the logic enjoys the disjunction property. Iemhoff (2001) showed that IPC can be characterised in terms of its admissible rules, by showing that it is the sole intermediate logic which admits a certain set of rules.
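For orientation, a standard formulation (not taken verbatim from the abstract): a multi-conclusion rule Γ/Δ is admissible in a logic L when, for every substitution σ,
$$ \big(\vdash_L \sigma\gamma \ \text{for all}\ \gamma\in\Gamma\big) \ \Rightarrow\ \big(\vdash_L \sigma\delta \ \text{for some}\ \delta\in\Delta\big). $$
In particular, A ∨ B / {A,B} is admissible exactly when ⊢ A ∨ B implies ⊢ A or ⊢ B, which is the disjunction property.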
We will present a similar characterisation for the Gabbay-de Jongh logics, also known as the logics of bounded branching. Our reasoning is based on the characterisation of IPC by Skura (1989), who presented a refutation system for IPC. In particular we show how one can inductively define the set of formulae that are non-derivable in the Gabbay-de Jongh logics. This talk is based on the work described in "Admissibility and Refutation: Some Characterisations of Intermediate Logics".
Title: Uniform Interpolation for Coalgebraic Fixpoint Logic
Speaker: Fatemeh Seifan (ILLC, University of Amsterdam)
In this talk we will use the connection between automata and logic to prove that a wide class of coalgebraic fixpoint logics enjoy uniform interpolation. To this aim, first we generalize one of the central results in coalgebraic automata theory, namely closure under projection, which is known to hold for weak-pullback preserving functors, to a more general class of functors, i.e., functors with quasi-functorial lax extensions. Then we will show that closure under projection implies definability of the bisimulation quantifier in the language of coalgebraic fixpoint logic, and finally we prove the uniform interpolation theorem.
Title: Free algebras for Gödel-Löb provability logic
Speaker: Sam van Gool (LIAFA, Université Paris-Diderot & Radboud Universiteit Nijmegen)
We give a construction of finitely generated free algebras for Gödel-Löb provability logic, GL. On the semantic side, this construction yields a notion of canonical graded model for GL and a syntactic definition of those normal forms which are consistent with GL. Our two main techniques are incremental constructions of free algebras and finite duality for partial modal algebras. In order to apply these techniques to GL, we use a rule-based formulation of the logic GL by Avron (which we simplify slightly), and the corresponding semantic characterization that was recently obtained by Bezhanishvili and Ghilardi.
Title: Canonical rules for modal logic
Speaker: Nick Bezhanishvili (ILLC, University of Amsterdam)
Date: Thursday, 20 March, 2014.
In this talk I will discuss how to transform the method of implication-free canonical formulas for intuitionistic logic into the setting of modal logic.
Title: From free algebras to proof bounds.
Speaker: Silvio Ghilardi (University of Milano)
Date: Thursday, 20 February, 2014.
(This is joint work with Nick Bezhanishvili). In the first part of our contribution, we review and compare existing constructions of finitely generated free algebras in modal logic focusing on step-by-step methods. We discuss the notions of step algebras and step frames arising from recent investigations, as well as the role played by finite duality. A step frame is a two-sorted structure which admits interpretations of modal formulae without nested modal operators.
In the second part of the contribution, we exploit the potential of step frames for investigating proof-theoretic aspects. This includes developing a method which detects when a specific rule-based calculus Ax axiomatizing a given logic L has the so-called bounded proof property. This property is a kind of an analytic subformula property limiting the proof search space. We prove that every finite conservative step frame for Ax is a p-morphic image of a finite Kripke frame for L iff Ax has the bounded proof property and L has the finite model property. This result, combined with a "step version" of the classical correspondence theory, turns out to be quite powerful in applications. For simple logics such as K, K4, S4, etc., establishing basic metatheoretical properties becomes a completely automatic task (the related proof obligations can be instantaneously discharged by current first-order provers). For more complicated logics, some ingenuity is still needed; however, we were able to successfully apply our uniform method to Avron's cut-free system for GL and to Goré's cut-free system for S4.3.
Title: Cut-elimination in circular proofs.
Speaker: Jérôme Fortier (UQAM / AMU)
Circularly-defined objects (e.g. inductive and coinductive types) live in the world of mu-bicomplete categories, where they arise from the following operations: finite products and coproducts, initial algebras, and final coalgebras. In the spirit of the Curry-Howard correspondence, Santocanale (2002) introduced a formal proof system for denoting arrows in such categories. The proofs in this system can be circular and yet sound. That means that you can prove some formulas from themselves, given that the cycles satisfy some condition analogous to an acceptance condition for parity games. However, the system was not full, which means that some arrows that must exist in any mu-bicomplete category could not be represented. We recently filled the system by adding the cut rule and modifying the condition on cycles. Not only does the new system remain sound and becomes full, but it also enjoys an automatic cut-elimination procedure that models the natural computation that arises from circular definitions. Some natural questions from this point are about expressiveness of such computations, which I will talk about.
(This is joint work with Luigi Santocanale.)
Title: Open maps, small maps and final coalgebras.
Speaker: Benno van den Berg (ILLC, University of Amsterdam)
Date: Thursday, 6 February, 2014.
In his book on non-well-founded sets Aczel proves a general final coalgebra theorem, showing that a wide class of endofunctors on the category of classes has a final coalgebra. I will discuss generalisations of this result to the setting of algebraic set theory and try to motivate why it is interesting to look at results at this level of generality.
Title: A coalgebraic view of characteristic formulas in equational modal fixed point logics.
Speaker: Sebastian Enqvist (ILLC, University of Amsterdam)
The literature on process theory and structural operational semantics abounds with various notions of behavioural equivalence and, more generally, simulation preorders. An important problem in this area from the point of view of logic is to find formulas that characterize states in finite transition systems with respect to these various relations. Recent work by Aceto et al. shows how such characterizing formulas in equational modal fixed point logics can be obtained for a wide variety of behavioural preorders using a single method. In this paper, we apply this basic insight from the work by Aceto et al. to Baltag's ``logics for coalgebraic simulation'' to obtain a general result that yields characteristic formulas for a wide range of relations, including strong bisimilarity, simulation, as well as bisimulation and simulation on Markov chains and more. We also provide conditions that allow us to automatically derive characteristic formulas in the language of predicate liftings for a given finitary functor. These latter languages have the advantage of staying closer to the more conventional syntax of languages like Hennessy-Milner logic.
Title: Coalgebraic Announcement Logics
Speaker: Facundo Carreiro (ILLC, University of Amsterdam)
In epistemic logic, dynamic operators describe the evolution of the knowledge of participating agents through communication, one of the most basic forms of communication being public announcement. Semantically, dynamic operators correspond to transformations of the underlying model. While metatheoretic results on dynamic epistemic logic so far are largely limited to the setting of Kripke models, there is evident interest in extending its scope to non-relational modalities capturing, e.g., uncertainty or collaboration. We develop a generic framework for non-relational dynamic logic by adding dynamic operators to coalgebraic logic. We discuss a range of examples and establish basic results including bisimulation invariance, complexity, and a small model property.
The talk is based on the paper published in ICALP 2013, available at: http://glyc.dc.uba.ar/facu/papers/coalg-announcements.pdf.
Title: An algebraic approach to cut-elimination for substructural logics.
Algebraic proof theory combines algebraic and proof theoretical techniques to prove results about substructural logics. In [1], the authors give an algebraic proof of the cut-elimination theorem for substructural logics using quasi-completions of FL-algebras. It is also shown that quasi-completions of FL-algebras are isomorphic to their MacNeille completions. Another recent work [2] explores the connections between admissibility of the cut rule upon adding a structural rule to the calculus and preservation of the structural rule under MacNeille completion. In this talk, I will give an introduction to algebraic proof theory followed by an overview of preservation of equations under completion of algebras.
[1] F. Belardinelli, P. Jipsen and H. Ono, Algebraic aspects of cut elimination, Studia Logica 77(2) (2004), 209-240.
[2] A. Ciabattoni , N. Galatos and K. Terui, Algebraic proof theory for substructural logics: Cut-elimination and completions, Annals of Pure and Applied Logic, Volume 163, Issue 3, March 2012, pp. 266-290.
Title: Terminal Sequences and Their Coalgebras.
Speaker: Johannes Marti (ILLC, University of Amsterdam)
This talk starts with a review of the terminal sequence construction for an endofunctor. In nice cases the terminal sequence converges and yields the cofree coalgebras in the category of coalgebras for the endofunctor. These cofree coalgebras form a comonad whose category of Eilenberg-Moore coalgebras is isomorphic to the category of coalgebras for the original endofunctor. What is good about these comonads is that, under reasonable additional assumptions, we can use coequations to define subcomonads whose categories of Eilenberg-Moore coalgebras are covarieties of coalgebras.
In not so nice cases the terminal sequence does not converge and we do not obtain a comonad to specify covarieties. These not so nice cases include the powerset functor, whose coalgebras are Kripke frames, where modally definable classes of frames could be studied as covarieties.
I show that even in the not so nice cases we can define a category of coalgebras for a terminal sequence that plays an analogous role as the category of Eilenberg-Moore coalgebras for a comonad. Moreover, under the same additional assumptions as for comonads, the covarieties are categories of coalgebras for subsequences obtained from coequations.
Comonads and terminal sequences are both colax functors from a small 2-category to the 2-category of categories. Interestingly, there is a notion of coalgebra for any such colax functor. I give examples of relevant categories of coalgebras that need this full generality.
Title: Two isomorphism criteria for directed colimits.
Speaker: Luca Spada (ILLC, Amsterdam and University of Salerno)
Using the general notions of finite presentable and finitely generated object introduced by Gabriel and Ulmer in 1971, we prove that, in any category, two sequences of finitely presentable objects and morphisms (or two sequences of finitely generated objects and monomorphisms) have isomorphic colimits (=direct limits) if, and only if, they are confluent. The latter means that the two given sequences can be connected by a back-and-forth sequence of morphisms that is cofinal on each side, and commutes with the sequences at each finite stage. We illustrate the criterion by applying the abstract results to varieties (=equationally definable classes) of algebras, and mentioning applications to non-equational examples.
Title: A non-commutative Priestley duality
Speaker: Sam van Gool (Radboud University, Nijmegen)
Date: Wednesday February 13, 2013
In this talk on our recent paper*, I will describe a new Priestley-style duality for skew distributive lattices. Since its introduction in Stone's seminal papers from 1936 and 1937, duality theory has been important in the study of propositional logics beyond the classical, such as intuitionistic, modal, and substructural logics. It provides the mathematical framework for studying the intimate link between the syntax and semantics of a logic.
The results that I describe in this talk form a generalization of duality theory beyond the commutative case. This opens the door to applications of duality theory to logical systems in which the basic operations of conjunction and disjunction are no longer assumed to be commutative. Such systems are of interest because of their possible relevance to both quantum logic and dynamic epistemic logic.
*preprint available on http://arxiv.org/pdf/1206.5848
Title: Bilattices with modal operators
Speaker: Umberto Rivieccio (University of Birmingham)
Date: Wednesday May 30, 2012
Some authors have recently started to consider modal expansions of the well-known Belnap four-valued logic, either with implication (S. Odintsov, H. Wansing et al.) or without it (G. Priest). Given that some bilattice logics are four-valued (conservative) expansions of the Belnap logic, we may wonder whether it makes sense to consider modal expansions of bilattice logics and their algebraic counterpart, which would be bilattices with modal operators. I will present a few ideas on how this can be done.
Title: Topological- and Neighborhood-Sheaf Semantics for First-Order Modal Logic
Speaker: Kohei Kishida (ILLC, Amsterdam)
This talk extends Tarski's classical topological semantics for propositional modal logic to first-order modal logic, with respect to the following two aspects: (i) It takes a sheaf over a topological space, and shows that such structures (or the category of them) model first-order modal logic by equipping points of the space with domains of individuals. (ii) It is also shown how topological semantics extends to the more general case of neighborhood semantics, at the level of sheaf semantics. These extensions provide semantics for the simple unions of first-order logic with S4 modal logic and with more general modal logics. Corresponding to the point-set and algebraic formulations of Tarski's topological semantics, the semantics of this paper will be presented in both point-set and topos-theoretic formulations.
The seminar is currently organized by Bahareh Afshari, Benno van den Berg, Nick Bezhanishvili, Jan Rooduijn and Yde Venema. If you're interested in giving a talk at the seminar or have any questions regarding the seminar please contact Jan at j.m.w.rooduijn[at]uva.nl.
If you want to subscribe to the A|C seminar mailing list please send an email to Jan at j.m.w.rooduijn[at]uva.nl with the subject "A|C seminar". In case you no longer wish to subscribe to the mailing list send an email to Jan with the subject "A|C seminar unsubscribe".
α-Glucosidase and glycation inhibitory effects of Costus speciosus leaves
Handunge Kumudu Irani Perera1,
Walgama Kankanamlage Vindhya Kalpani Premadasa1,2 &
Jeyakumaran Poongunran1,2
BMC Complementary and Alternative Medicine volume 16, Article number: 2 (2015) Cite this article
Hyperglycaemia is a salient feature of poorly controlled diabetes mellitus. The rate of protein glycation increases with hyperglycaemia, leading to long-term complications of diabetes. One approach to controlling blood glucose in diabetes targets reducing the postprandial spikes of blood glucose. The objectives of this study were to assess the in vitro inhibitory effects of Costus speciosus (COS) leaves on α-amylase and α-glucosidase activities, fructosamine formation, protein glycation and glycation-induced protein cross-linking.
Methanol extracts of COS leaves were used. Inhibitory effects on enzyme activities were measured using porcine pancreatic α-amylase and α-glucosidase from Saccharomyces cerevisiae in the presence of COS extract. Percentage inhibition of the enzymes and the IC50 values were determined. In vitro protein glycation inhibitory effect of COS leaves on early and late glycation products were measured using bovine serum albumin or chicken egg lysozyme with fructose. Nitroblue tetrazolium was used to assess the relative concentration of fructosamine and polyacrylamide gel electrophoresis was used to assess the degree of glycation and protein cross-linking in the reaction mixtures.
α-Glucosidase inhibitory activity was detected in COS leaves with an IC50 of 67.5 μg/ml, which was significantly lower than the IC50 value of Acarbose (p < 0.01). Amylase inhibitory effects occurred at a comparatively higher concentration of extract, with an IC50 of 5.88 mg/ml, which was significantly higher than the IC50 value of Acarbose (p < 0.01). COS (250 μg/ml) demonstrated inhibitory effects on fructosamine formation and glycation-induced protein cross-linking which were on par with those of 1 mg/ml aminoguanidine.
Methanol extracts of COS leaves demonstrated in vitro inhibitory activities on α-glucosidase, fructosamine formation, glycation and glycation induced protein cross-linking.
These findings provide scientific evidence to support the use of COS leaves for hypoglycemic effects with an added advantage in slowing down protein glycation.
Diabetes mellitus is a chronic disease which causes millions of deaths worldwide each year as a result of the associated complications [1]. Persistently elevated blood glucose concentration is a salient feature of poorly controlled diabetes. As a result, protein glycation commences with the non-enzymatic addition of sugar molecules to proteins at an accelerated rate, as the rate of this process depends on the concentration of sugar. In the early stages of glycation, the sugar reacts with free amino groups of proteins to form stable Amadori products such as fructosamine [2]. Glycation proceeds over a period of time, which leads to the production of advanced glycation end products (AGEs). AGEs cause irreversible structural and functional damage to the affected molecules [3]. Protein cross-linking occurs in the later part of glycation, further aggravating tissue damage, especially when the cross-links are formed in long-lived proteins such as collagen [4]. Protein glycation is identified as a primary cause for the development of chronic diabetic complications such as retinopathy, nephropathy and cardiovascular diseases [5]. Glycation-induced cross-linking makes extracellular matrix proteins rigid and less susceptible to enzymatic digestion. This leads to thickening of basement membranes, affecting organ function, as observed in diabetic nephropathy [6]. Furthermore, the role of AGEs has been discussed in aging, with a particular emphasis on skin aging [7], and in age-related neurodegenerative diseases [8].
Therapeutic agents used for diabetes aim to bring blood glucose concentrations as close as possible to normal physiological levels [9]. Some antidiabetic drugs target key carbohydrate-hydrolyzing enzymes such as α-amylase and α-glucosidase in order to decrease the post-prandial elevation of blood glucose [10, 11]. α-Amylase catalyzes the initial hydrolysis of starch into α-limit dextrins, maltose and maltotriose [12]. α-Glucosidase catalyzes the release of absorbable monosaccharides from the substrate [13]. As a result, postprandial spikes of blood glucose appear during the digestion of dietary starch. Inhibition of α-amylase and α-glucosidase delays carbohydrate digestion and decreases glucose absorption, bringing down the post-prandial elevation of blood glucose. Inhibition of protein glycation is another therapeutic approach which can delay the progression of diabetic complications. However, the synthetic drugs which act as inhibitors of amylase, glucosidase and glycation show side effects in addition to the desirable effects [3, 11].
Natural remedies used since ancient times have become popular as an effective, inexpensive and safe mode of treating diabetes [14]. It is recognized that there are more than 1,200 species of plants with hypoglycemic activity [15]. A review on the medicinal plants used to treat diabetes by ayurvedic and traditional physicians in Sri Lanka has reported the use of approximately 126 antidiabetic plants including Costus speciosus leaves [16]. However, most of these are used in traditional practice without proper scientific scrutiny [17].
Costus speciosus (COS) or Cheilocostus speciosus is used to treat various diseases and is also grown as an ornamental plant [18]. It belongs to the family Costaceae (Zingiberaceae). The genus Costus consists of approximately 175 species [19]. COS is a plant that is known as Thebu in Sinhala and crepe ginger or spiral ginger in English. Leaves of COS are arranged spirally around the trunk. The rhizome of COS is reported to possess hypoglycemic properties. Leaves of COS are popular among Sri Lankans and are included in main meals as a salad [20–22]. Consumption of COS leaves is believed to be effective in controlling the blood glucose and lipid levels [21, 23]. A recent study conducted in Sri Lanka has shown that the usage of herbal medicines is 76 % among a group of 252 type 2 diabetic patients investigated who were on one or more oral hypoglycaemic agents [24]. Among them, 47 % had consumed COS leaf as a salad in their main meals [24]. It is known that diabetic patients eat one leaf daily in India to keep the blood glucose concentration low [25]. COS was among three commonly used (>20 % usage) plants to lower blood glucose concentration by the Puerto Rican population [26]. Remedies prepared from two plants including COS are commonly known as "insulin" by the studied population in Puerto Rico [26]. When investigated, it was recognized that a daily dosage of approximately 0.8 of a COS leaf (~2.5 g fresh leaf) is consumed [26].
Several investigations have proven the hypoglycaemic effects of COS rhizome in alloxan or streptozotocin induced diabetic rats [27–30]. However, evidence to prove the effectiveness of COS leaf is lacking. Furthermore, there are no reports in the literature to date on the antiglycation potential of COS leaves. The objective of this study was to assess the in vitro inhibitory effects of Costus speciosus leaves on α-amylase and α-glucosidase activities, fructosamine formation, protein glycation and glycation-induced protein cross-linking.
Leaves of Costus speciosus (Koenig) Smith (Family Costaceae) were collected in March 2013 from Moratuwa, Sri Lanka, authenticated by the Deputy Director/National Herbarium and the voucher samples (Voucher No. HKIP-SLS-BIO-2013-02) were deposited at the National Herbarium, Royal Botanical Gardens, Peradeniya, Sri Lanka.
Preparation of methanol extracts
COS leaves were collected, cleaned and dried under shade for approximately 10 days. Dry leaves were ground using an electric grinder. Dry powder (10 g) of COS leaves was extracted three times with methanol (100 ml) using a sonicator. The filtered methanol was evaporated using a rotary evaporator (Buchi RII) at a temperature below 50 °C [31]. The dried crude methanol extract was resuspended in phosphate buffer (pH 7.4) to the required working concentrations prior to the experiments.
Measurement of α-Amylase inhibitory effect of COS leaves
α-Amylase inhibitory effect of COS extract was assessed using the pre-incubation method as described by Geethalakshmi et al. [32] from the method adapted from Bernfeld [33]. Porcine pancreatic α-amylase (Sigma) in ice-cold distilled water (5 unit/ml solution) and potato starch (1 % w/v) in 20 mM phosphate buffer (pH 6.9) with 6.7 mM sodium chloride were used. COS extract (40 μl) was mixed with 40 μl α-amylase and 80 μl of 20 mM phosphate buffered saline (pH 6.9) and pre-incubated for 15 min at 37 °C. Final concentration of COS extract used was 1 to 6.5 mg/ml. Starch (40 μl) was added after the pre-incubation and the reaction mixtures were incubated for 15 min at 37 °C. Dinitrosalicylic acid colour reagent was added (100 μl) to the tubes and incubated at 85 °C for 15 min. Distilled water (900 μl) was added to the tubes and the absorbance was measured at 540 nm. Appropriate blanks and controls were carried out. Acarbose (Sigma) was used as the standard inhibitor.
Measurement of α-Glucosidase inhibitory effect of COS leaves
α-Glucosidase inhibitory effect of COS extract was assessed using the method described by Elya et al. [34]. Sodium phosphate buffer (pH 6.8) (200 μl) and 120 μl of 1 mM p-Nitrophenyl α-D-Glucopyranoside (Sigma) was added to the tubes. Plant extract (40 μl) was added to the test and test blank. Tubes were pre-incubated for 15 min at 37 °C and then 40 μl of 0.1 U α-glucosidase from Saccharomyces cerevisiae (Sigma) was added to the tests and the control. Final concentration of COS extract used was 50 to 100 μg/ml. The reaction mixtures were incubated for another 15 min at 37 °C and the reaction was terminated using 100 mM sodium carbonate (800 μl). Absorbance was measured at 405 nm. Acarbose was used as the standard inhibitor.
Detection of inhibitory effect of COS leaves on fructosamine formation
Fructosamine formation during the incubation of proteins with sugar was measured using the method described by Meeprom et al. [2] with modifications. Briefly, chicken egg lysozyme (Sigma) was incubated with 500 mM fructose in 200 mM phosphate buffer (pH 7.4) containing 0.02 % sodium azide. Incubation was carried out in the dark in the presence or absence of 250 μg/ml or 5 mg/ml COS extract at 37 °C for 7 days. Aminoguanidine (AG) was used at 1 mg/ml as the positive control. Corresponding blanks were prepared in the absence of fructose. Aliquots were collected at day 5 and analyzed for the reduction of nitroblue tetrazolium. Test samples were mixed with the 0.1 M sodium carbonate buffer (pH 10.35) and left for 5 min. Appropriate blanks were prepared by adding fructose to the test blanks just before the assay. Nitroblue tetrazolium (0.5 M) in 0.1 M sodium carbonate buffer (pH 10.35) was added to the reaction mixtures and incubated at 37 °C for 15 min. Absorbance at 530 nm was measured. Percentage inhibition of the relative fructosamine concentration in the presence of COS and AG was calculated.
Calculation of percentage inhibition*
Percentage inhibition was calculated using the following formula.
$$ \%\ \mathrm{Inhibition} = 100 - \left[\frac{\left(\mathrm{Absorbance\ of\ Test} - \mathrm{Absorbance\ of\ Test\ Blank}\right) \times 100}{\left(\mathrm{Absorbance\ of\ Control} - \mathrm{Absorbance\ of\ Control\ Blank}\right)}\right] $$
*The formula was applied to calculate the enzyme inhibition and the % inhibition of the relative concentration of fructosamine.
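As a minimal illustration (not part of the original paper; the function name and absorbance values below are hypothetical), the formula can be computed from raw absorbance readings in Python as follows:

def percent_inhibition(abs_test, abs_test_blank, abs_control, abs_control_blank):
    # Percentage inhibition relative to the uninhibited control, per the formula above
    test_signal = abs_test - abs_test_blank            # residual activity with extract present
    control_signal = abs_control - abs_control_blank   # full activity of the uninhibited reaction
    return 100.0 - (test_signal * 100.0 / control_signal)

# Illustrative absorbance readings at 540 nm (hypothetical values):
print(percent_inhibition(0.42, 0.05, 0.80, 0.06))  # prints 50.0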
Calculation of IC50
The concentration of the extract that inhibits 50 % of the enzyme activity (IC50) was measured using a series of suitable extract concentrations. IC50 values were determined by plotting percent inhibition (Y axis) versus log10 extract concentration (X axis) and calculated by logarithmic regression analysis from the mean inhibitory values.
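A minimal sketch of this regression step, assuming a straight-line fit of mean percent inhibition against log10 of the extract concentration solved for the 50 % crossing (the exact fitting software used by the authors is not stated, and the data points below are hypothetical):

import numpy as np

def ic50_from_log_regression(concentrations, percent_inhibitions):
    # Linear regression of % inhibition against log10(concentration)
    x = np.log10(np.asarray(concentrations, dtype=float))
    y = np.asarray(percent_inhibitions, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)
    # Concentration at which the fitted line crosses 50 % inhibition
    return 10 ** ((50.0 - intercept) / slope)

# Hypothetical glucosidase data: concentrations in ug/ml, mean % inhibition
print(ic50_from_log_regression([50, 60, 70, 80, 100], [38, 45, 51, 56, 63]))  # roughly 69 ug/ml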
Enzyme inhibitory assays and the fructosamine inhibitory assay were performed three times. Each experiment was carried out in triplicate. Statistical analysis was performed using the t-test. p < 0.05 was considered significant.
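The paper specifies only that a t-test was used; a sketch assuming an unpaired two-sample t-test on triplicate values (SciPy), with hypothetical replicate IC50 readings, could look like this:

from scipy import stats

# Hypothetical triplicate glucosidase IC50 values (ug/ml) for COS and Acarbose
cos_ic50 = [66.8, 67.5, 68.2]
acarbose_ic50 = [207.1, 208.5, 210.0]

t_stat, p_value = stats.ttest_ind(cos_ic50, acarbose_ic50)
print(f"t = {t_stat:.2f}, p = {p_value:.4g}")  # p < 0.05 taken as significant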
Detection of glycation inhibitory effect of COS leaves
Glycation of bovine serum albumin (BSA) (Sigma) was undertaken in vitro as described by Wijetunge and Perera [35]. In brief, BSA was incubated with fructose (500 mM) in 200 mM phosphate buffer (pH 7.4) containing 0.02 % sodium azide at 37 °C for 30 days. Incubations were conducted in the presence or absence of 1 or 5 mg/ml COS extract. AG (1 mg/ml) was used as the positive control. Corresponding blanks were prepared in the absence of fructose. Aliquots were collected at day 12 or 13 and day 30 and analyzed for the degree of glycation, using polyacrylamide gel electrophoresis (PAGE) under non-denaturing conditions. Electrophoresis was carried out with the Enduro vertical gel electrophoresis system- E2010-P according to the standard Laemmli method using 10 % polyacrylamide gels [36]. Gels were stained with Coomassie brilliant blue. Changes in the migration position of the BSA bands in the aliquots were compared. Approximate percentage inhibition of glycation was assessed in comparison to the uninhibited reaction, based on the decrease in migration of BSA in the presence of COS extract. Experiments were repeated three times.
Detection of glycation induced protein cross-linking inhibitory effect of COS leaves
Glycation induced protein cross-linking inhibitory effect of COS was assessed using the method described by Perera and Ranasinghe [37]. Briefly, chicken egg lysozyme (Sigma) was incubated with fructose in the presence or absence of 250 μg/ml or 2 mg/ml COS extracts for 14 days. Other conditions were as described for the fructosamine assay. Aliquots were collected at day 6 and 14 and analyzed for the appearance of high molecular weight products using sodium dodecyl sulfate polyacrylamide gel electrophoresis (SDS-PAGE). Electrophoresis was carried out with the Enduro Vertical Gel Electrophoresis system- E2010-P according to the standard Laemmli method using 12 % SDS-polyacrylamide gels [36]. Gels were stained with Coomassie brilliant blue. Appearance of high molecular weight products of lysozyme in the aliquots was compared. Experiments were repeated three times.
Yield of the methanol extract was 15.8 % from the dry COS leaf powder. Dry extract was resuspended in phosphate buffer immediately before the assays.
α-Amylase inhibitory effect of COS leaves
Even though an amylase inhibitory effect was observed with COS, the IC50 for amylase inhibition of COS was 5.88 mg/ml, which was significantly higher than the IC50 value of the standard inhibitor Acarbose (262.54 μg/ml) for porcine pancreatic amylase (p < 0.01). The percent α-amylase inhibitions (%) of COS at varying concentrations are shown in Fig. 1.
The percent α-amylase and α-glucosidase inhibitions (%) of COS at varying concentrations. Data are indicated as mean percentage inhibition. Final concentrations of the extract used for amylase inhibitory assay were 1, 2, 3, 4, 5, 6.5 mg/ml and for glucosidase inhibitory assay were 50, 60, 70, 80, 100 μg/ml
α-Glucosidase inhibitory effect of COS leaves
The α-glucosidase inhibitory effect observed with COS leaves was significantly higher than the α-amylase inhibitory effect (p < 0.01). The IC50 for glucosidase inhibition of COS was 67.5 μg/ml, which was significantly lower than the IC50 value of the standard inhibitor Acarbose (208.53 μg/ml) for yeast glucosidase (p < 0.01). The percent α-glucosidase inhibitions (%) of COS at varying concentrations are shown in Fig. 1.
Inhibitory effect of COS leaves on fructosamine formation
Fructosamine formation was compared using aliquots collected on day 5 of the incubation. The difference between the absorbance of the test and the blank is proportionate to the relative concentration of fructosamine present in the aliquot. There was a significant reduction of the relative fructosamine concentration compared to the uninhibited control (p < 0.01), with an inhibition of 53.42 % in the presence of 250 μg/ml and 89.95 % in the presence of 5 mg/ml of COS extract (Fig. 2). AG showed an inhibition of fructosamine formation by 47.95 % (Fig. 2).
Effect of COS on the formation of fructosamine. Relative concentration of fructosamine formed was considered to be proportionate to the difference between the test (T) and blank (B). T-B of aliquots collected on day 5 of the incubation was compared. T-B of the uninhibited reaction with no extract (COS-0) was expressed as 100 %. COS-0.25: In the presence of 250 μg/ml COS, COS-5: In the presence 5 mg/ml of COS. AG: In the presence of AG
Glycation inhibitory effect of COS leaves
Migration of BSA towards the anode was increased (downward arrow) in the presence of fructose (Fig. 3). Previously we have reported that this increase is proportionate to the degree of glycation [35]. The increase in BSA migration was retarded in the presence of COS (upward arrow), indicating glycation inhibition (Fig. 3). This inhibition was similar to that of the standard inhibitor AG (results not shown). Such a change in migration did not occur in the absence of fructose even when the plant extract (5 mg/ml) was included in the reaction mixture (Fig. 3). Inhibitory effects of COS were observed with both 1 and 5 mg/ml concentrations, and the inhibition lasted even on day 30 (Fig. 3). However, the degree of inhibition seems to decrease with longer incubation and lower concentrations of COS, as denoted by the increase in the gap between the height of the two arrows in Fig. 3b and c compared to that of a.
Glycation inhibitory effect of COS. PAGE was conducted. a: with aliquots collected on day 12 with 5 mg/ml extracts. b: with aliquots collected on day 30 with 5 mg/ml extracts. c: with aliquots collected on day 13 with 1 mg/ml extracts. -P: in the absence of COS, COS: in the presence of COS, −Fructose: in the absence of fructose, +Fructose: in the presence of fructose. Experiment was repeated three times
Glycation induced protein cross-linking inhibitory effect of COS leaves
High molecular weight products of lysozyme were formed in the presence of fructose (Fig. 4). Previously we have reported that the amount of such products formed is proportionate to the degree of glycation induced protein cross-linking [37]. These products represented the dimer, trimer and tetramer of lysozyme as demonstrated previously using molecular weight markers [37]. There was a reduction in the amount of high molecular weight products formed in the presence of AG and COS leaf extract, indicating inhibition of protein cross-linking. The inhibition observed after the 14-day incubation with 2 mg/ml COS extract was on par with that of AG (Fig. 4a). The inhibitory effect of COS was observed even at a lower concentration (250 μg/ml) of extract (Fig. 4b). High molecular weight products were not detected in the absence of fructose even when COS was included in the reaction mixture (Fig. 4b).
Glycation induced protein cross-linking inhibitory effect of COS. SDS PAGE was conducted. a: with aliquots collected on day 14 with 2 mg/ml extract. b: with aliquots collected on day 6 with 250 μg/ml extract. -P: in the absence of COS, COS: in the presence of COS, −F: in the absence of fructose, +F: in the presence of fructose. Experiment was repeated three times
Hyperglycaemia is an independent risk factor in the development of chronic diabetic complications. Therefore, the management of type 2 diabetes relies on the maintenance of blood glucose concentration at a normal or near-normal level [9]. COS leaves are consumed in the Sri Lankan diet [20–22] and are used to treat diabetes [16, 24, 26]. However, scientific evidence to support the hypoglycaemic effects of COS leaves is lacking. Some plants are known to have glycation inhibitory effects which will provide additional benefit. Antiglycation effects may delay glycation induced diabetic complications even when blood glucose is elevated. To date, there are no reports available on the effects of COS leaves on the formation of early or late glycation products. The present study revealed the inhibitory effects of COS leaves on α-glucosidase activity, fructosamine formation, protein glycation and glycation-induced protein cross-linking.
Several investigations carried out using alloxan or streptozotocin induced diabetic rats have proven the hypoglycaemic effects of COS rhizome. Results of these studies show that COS rhizome increases the insulin secretion and peripheral utilization of glucose. Most of these studies also have shown cholesterol lowering effects of COS. The ethanol extract of COS rhizome showed a significant reduction in blood glucose and glycosylated haemoglobin and an increase in liver glycogen and insulin in alloxan induced diabetic rats treated for 60 days [27]. Furthermore, improvements in many other biochemical parameters such as cholesterol lowering effects were observed in the test group [27]. These effects were comparable with those of the hypoglycaemic drug glibenclamide. Ethanol extract of COS roots decreased blood glucose and increased the expression of insulin, insulin receptor, glucose transporter, glucokinase, aldolase, pyruvate kinase, succinate dehydrogenase and glycogen synthase in streptozotocin induced rats treated for 4 weeks [28]. Ethanol extract of COS root significantly reduced blood glucose concentration, increased glycogenesis and decreased gluconeogenesis in alloxan induced rats treated for 4 weeks [29]. Improvement of lipid parameters and hepatic antioxidant enzyme activities was also observed in their study [29]. Petroleum ether, chloroform, methanol and aqueous extracts of COS rhizome were studied in streptozotocin induced diabetic rats for effects on oral glucose tolerance after a single dose of extracts and for hypoglycaemic effects after multiple doses of extracts for 14 days [30]. Hypoglycaemic effects observed were highest with methanol and water extracts of COS, which were on par with glibenclamide [30].
Diosgenin is the major constituent isolated from COS [38] and a quantity of 0.37 % was found in leaves [39]. Gavillán-Suárez et al. demonstrated the presence of a high content of alkaloids in COS leaves [26]. Compounds isolated from COS and other species of the genus Costus that have shown hypoglycaemic effects with a concomitant increase in insulin in diabetic rats include diosgenin [40], eremanthin [41], costunolide [42], quercetin glycosides [43] and the pentacyclic triterpene β-Amyrin [44]. Eremanthin isolated from COS rhizome has significantly reduced blood glucose level in a dose dependent manner and glycosylated hemoglobin HbA1c in streptozotocin induced diabetic rats treated for 60 days [41]. Eremanthin has also increased plasma insulin and tissue glycogen while showing hypolipidaemic effects [41]. Costunolide (20 mg/kg) isolated from COS root has significantly decreased glycosylated hemoglobin (HbA1c), total cholesterol, triglyceride and LDL cholesterol, markedly increased plasma insulin, tissue glycogen, HDL cholesterol and serum protein, and restored the altered liver enzymes in plasma in streptozotocin induced diabetic rats treated for 30 days [42].
Among the few studies investigating antidiabetic effects of COS leaf, one study reported the effect of COS leaf methanol extract and water extract in reversing the insulin resistance induced by a high fat diet in male Wistar rats treated for 4 weeks [21]. Another study reported the glucose binding capacity and the reduction of glucose diffusion rate with COS leaf extracts in vitro [45]. They also reported, using a slightly different method, an amylase inhibitory effect of 18 % with 2 % COS leaf, which was significantly lower than that of Acarbose [45]. These findings suggested possible mechanisms of COS leaf extract in delaying intestinal glucose absorption. A significant association of hypoglycaemia was revealed with the inclusion of COS leaves in the diet in diabetic patients who were on oral hypoglycaemic drugs [24].
Previous findings with COS rhizome showed multiple effects of the extract in the body which can bring down the blood glucose [27]. The effect of COS on the intestinal absorption of glucose was not reported in these studies, except for a very recent study which revealed a marginal amylase inhibitory effect [45]. Even though the amylase inhibitory effect seen with COS leaf is marginal in the current study, in agreement with the previous study [45], the current study reveals a significant inhibitory effect of COS on glucosidase activity. Hence COS leaf is likely to blunt the postprandial spikes of blood glucose.
Antiglycation effects of C. speciosus leaves have not been reported in the literature to date. A few previous studies have shown a significant reduction in glycosylated haemoglobin with eremanthin isolated from COS rhizome for 60 days [41] and costunolide isolated from COS rhizome for 30 days [42], with a reduction in blood glucose. As the specific antiglycation mechanisms were not investigated in these studies, whether the decrease demonstrated in glycosylated haemoglobin was a direct effect of the extract or an indirect effect resulting from the lowering of blood glucose is not clear. It is known that antiglycation effects are correlated with antioxidant activity [46]. There is evidence for antioxidant activity of COS rhizome. Costunolide and eremanthin isolated from the root of COS demonstrated a significant increase in the activity of superoxide dismutase, catalase and glutathione peroxidase when streptozotocin induced diabetic rats were treated for 60 days [47]. The inhibitory effect of C. pictus leaves (another species of the genus Costus known as "insulin plant") on the early glycation product fructosamine was assessed using the nitroblue tetrazolium reduction method. Results showed approximately 50 % inhibition of early glycation products by 100 μg/ml methanol extracts of C. pictus leaves, which was on par with the standard inhibitor AG [48]. We have demonstrated in vitro protein glycation inhibitory effects of COS leaves for the first time, using three methods to monitor inhibition of early as well as late glycation products in the presence of a high concentration of sugar. The effect of the COS extract on fructosamine formation at 250 μg/ml observed in the current study matches the findings for 100 μg/ml C. pictus leaves [48]. Whether the inhibition observed in our study on the formation of late glycation products (protein cross-links) is due to the inhibition that occurred at early glycation events or due to inhibitory effects occurring at several stages is not identified. It is understood that a plant with glucose lowering effects will bring down glycation as a result of the reduction of substrate concentration. However, the methods we adopted are designed to check the inhibitory effects on glycation at a high concentration of sugar and are therefore likely to be independent of the hypoglycaemic effects of the extract.
The current study indicates possible mechanisms by which COS leaf may exert hypoglycaemic effects and delay chronic diabetic complications in vivo. Even though the methods used are simple, they have been validated for accuracy and reproducibility. However, one limitation of the present study is the difficulty of judging in vivo efficacy purely on the basis of in vitro findings. Another limitation is that the safety of the COS extracts was not investigated. However, there are no reports of toxic effects of COS leaves, and the documented evidence shows protection against toxic effects caused by substances such as streptozotocin: 50, 100 and 150 mg/kg COS leaf water extract caused strong inhibitory effects against the genotoxicity and histopathologic alterations induced by streptozotocin in rats [23], and even administration of higher doses (1500 to 3000 mg/kg) of COS leaf aqueous extract orally for 12 weeks did not show features of liver or renal toxicity in insulin-resistant rats [22]. Similar evidence on safety was obtained when cell viability was measured in cell cultures with methanol extract of COS leaf [49] and with ethyl acetate and water extracts of COS leaf [20]. Based on the previous literature on the dosage of COS leaf used to lower blood glucose [26], an approximate daily dose of 57.45 mg (~0.82 mg/kg assuming a body weight of 70 kg) of COS leaf methanol extract could be suggested for investigating efficacy in humans. This dosage is far below the dosages used in experimental animals [22, 23].
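The per-kilogram figure quoted above is a simple scaling of the estimated daily dose by body weight; a minimal sketch of that arithmetic, using the 70 kg body weight assumed in the text:

```python
# Scale a total daily dose of extract to a per-kilogram dose, as in the
# ~0.82 mg/kg figure quoted above (assumes the 70 kg body weight stated in the text).
def dose_per_kg(total_daily_dose_mg: float, body_weight_kg: float) -> float:
    return total_daily_dose_mg / body_weight_kg

print(round(dose_per_kg(57.45, 70.0), 2))  # 0.82 (mg/kg)
```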
The in vitro inhibitory effect of COS leaves on α-glucosidase activity was demonstrated for the first time, and this may be one mechanism by which COS leaves exert hypoglycaemic effects in vivo. The current study also reveals, for the first time, the inhibitory effects of COS leaves on the formation of early and late glycation products (fructosamine and protein cross-links, respectively) in the presence of a high concentration of fructose. These findings provide scientific evidence to support the use of COS leaves for hypoglycaemic effects, with the added advantage of slowing down protein glycation. Further studies are necessary to evaluate the possibility of using COS leaves as a safe alternative to synthetic antidiabetic drugs.
AGE: advanced glycation end products
AG:
COS: Costus speciosus
SDS-PAGE: sodium dodecyl sulfate polyacrylamide gel electrophoresis
International Diabetes Federation. IDF Diabetes Atlas. 6th ed. Brussels, Belgium: International Diabetes Federation; 2013. http://www.idf.org/diabetesatlas.
Meeprom A, Sompong W, Chan CB, Adisakwattana S. Isoferulic acid, a new anti-glycation agent, inhibits fructose-and glucose-mediated protein glycation in vitro. Molecules. 2013;18(6):6439–54.
Sadowska-Bartosz I, Bartosz G. Prevention of protein glycation by natural compounds. Molecules. 2015;20(2):3309–34.
Aronson D. Cross-linking of glycated collagen in the pathogenesis of arterial and myocardial stiffening of aging and diabetes. J Hypertens. 2003;21(1):3–12.
Goh SY, Cooper ME. The role of advanced glycation end products in progression and complications of diabetes. J Clin Endocrinol Metab. 2008;93(4):1143–52.
Singh VP, Bali A, Singh N, Jaggi AS. Advanced glycation end products and diabetic complications. Korean J Physiol Pharmacol. 2014;18(1):1–14.
Gkogkolou P, Böhm M. Advanced glycation end products: key players in skin aging? Dermato-Endocrinology. 2012;4(3):259–70.
Li J, Liu D, Sun L, Lu Y, Zhang Z. Advanced glycation end products and neurodegenerative diseases: mechanisms and perspective. J Neurol Sci. 2012;317(1):1–5.
Sheard NF, Clark NG, Brand-Miller JC, Franz MJ, Pi-Sunyer FX, Mayer-Davis E, et al. Dietary carbohydrate (Amount and Type) in the prevention and management of diabetes a statement by the American diabetes association. Diabetes Care. 2004;27(9):2266–71.
Mahomoodally MF, Subratty AH, Gurib-Fakim A, Choudhary MI, Nahar Khan S. Traditional medicinal herbs and food plants have the potential to inhibit key carbohydrate hydrolyzing enzymes in vitro and reduce postprandial blood glucose peaks in vivo. The Scientific World J. 2012; doi:10.1100/2012/285284.
Olaokun OO, McGaw LJ, Eloff JN, Naidoo V. Evaluation of the inhibition of carbohydrate hydrolysing enzymes, antioxidant activity and polyphenolic content of extracts of ten African Ficus species (Moraceae) used traditionally to treat diabetes. BMC Complementary and Alternative Medicine. 2013;13(1):94–103.
Sales PM, Souza PM, Simeoni LA, Magalhães PO, Silveira D. α-Amylase inhibitors: a review of raw material and isolated compounds from plant source. J Pharm Pharm Sci. 2012;15(1):141–83.
Kumar S, Narwal S, Kumar V, Prakash O. α-glucosidase inhibitors from plants: a natural approach to treat diabetes. Pharmacogn Rev. 2011;5(9):19–29.
Grover JK, Yadav S, Vats V. Medicinal plants of India with antidiabetic potential. J Ethnopharmacol. 2002;81(1):81–100.
Ediriweera ERHSS, Ratnasooriya WD. A review on herbs used in treatment of diabetes mellitus by Sri Lankan ayurvedic and traditional physicians. Ayu. 2009;30(4):373–91.
Jung M, Park M, Lee CH, Kang Y, Kang ES, Kim SK. Antidiabetic agents from medicinal plants. Curr Med Chem. 2006;13:1203–18.
Modak M, Dixit P, Londhe J, Ghaskadbi S, Devasagayam TPA. Indian herbs and herbal drugs used for the treatment of diabetes. J Clin Biochem Nutr. 2007;40(3):163–73.
Rani AS, Sulakshana G, Patnaik S. Costus speciosus, An antidiabetic plant-review. Fons Scientia Journal of Pharmacy Research. 2012;1(3):52–3.
Pawar VA, Pawar PR. Costus speciosus: an important medicinal plant. International Journal of Science and Research. 2014;3(7):28–33.
Samarakoon KW, Lakmal HC, Kim SY, Jeon YJ. Electron spin resonance spectroscopic measurement of antioxidant activity of organic solvent extracts derived from the methanolic extracts of Sri Lankan thebu leaves (Costus speciosus). Journal of the National Science Foundation of Sri Lanka. 2014;42(3):209–16.
Subasinghe HWAS, Hettihewa LM, Gunawardena S, Liyanage T. Methanol and water extracts of Costus speciosus (j.könig) sm. leaves reverse the high-fat-diet induced peripheral insulin resistance in experimental Wistar rats. International Research Journal of Pharmacy. 2014;5(2):44–9.
Subasinghe HWAS, Hettihewa LM, Gunawardena S, Liyanage T. Evaluation of aqueous extract of Costus speciosus(J.König)Sm.leaf for hepatic and renal toxicities: biochemical and histopathological perspectives. European Journal of Pharmaceutical and Medical Research. 2015;2(4):1–12.
Girgis SM, Shoman TMT, Kassem SM, Ezz El-Din A, Abdel-Aziz KB. Potential Protective effect of Costus speciosus or its nanoparticles on streptozotocin-induced genotoxicity and histopathological alterations in rats. Journal of Nutrition & Food Sciences. 2015;S3:002. doi:10.4172/2155-9600.1000S3002.
Medagama AB, Bandara R, Abeysekera RA, Imbulpitiya B, Pushpakumari T. Use of complementary and alternative medicines (CAMs) among type 2 diabetes patients in Sri Lanka: a cross sectional survey. BMC Complementary and Alternative Medicine. 2014;14(1):374. doi:10.1186/1472-6882-14-374.
Vishalakshi DD, Asna U. Nutrient profile and antioxidant components of Costus speciosus Sm. and Costus igneus Nak. Indian Journal of Natural Products and Resources. 2010;1:116–8.
Gavillán-Suárez J, Aguilar-Perez A, Rivera-Ortiz N, Rodríguez-Tirado K, Figueroa-Cuilan W, Morales-Santiago L, et al. Chemical profile and in vivo hypoglycemic effects of Syzygium jambos, Costus speciosus and Tapeinochilos ananassae plant extracts used as diabetes adjuvants in Puerto Rico. BMC Complementary and Alternative Medicine. 2015;15:244. doi:10.1186/s12906-015-0772-7.
Revathy J, Abdullah SS, Kumar PS. Antidiabetic effect of Costus Speciosus rhizome extract in alloxan induced albino rats. Journal of Chemistry and Biochemistry. 2014;2(1):13–22.
Ali HA, Almaghrabi OA, Afifi ME. Molecular mechanisms of anti-hyperglycemic effects of Costus speciosus extract in streptozotocin-induced diabetic rats. Saudi Medical Journal. 2014;35(12):1501–6.
Bavarva JH, Narasimhacharya AVRL. Antihyperglycemic and hypolipidemic effects of Costus speciosus in alloxan induced diabetic rats. Phytother Res. 2008;22(5):620–6.
Rajesh MS, Harish MS, Sathyaprakash RJ, Shetty AR, Shivananda TN. Antihyperglycemic activity of the various extracts of Costus speciosus rhizomes. Journal of Natural Remedies. 2009;9(2):235–41.
Poongunran J, Perera HKI, Fernando WIT, Jayasinghe L, Sivakanesan R. α-Glucosidase and α-amylase inhibitory activities of nine Sri Lankan antidiabetic plants. British J Pharmaceutical Res. 2015;7(5):365–74.
Geethalakshmi R, Sarada DVL, Marimuthu P, Ramasamy K. α-Amylase inhibitory activity of Trianthema decandra L. Int J Biotechnol Biochemistry. 2010;6(3):369–76.
Bernfeld P. Amylases, alpha and beta. Methods Enzymol. 1955;1:149–58.
Elya B, Basah K, Munim A, Yuliastuti W, Bangun A, Septiana EK. Screening of α-glucosidase inhibitory activity from some plants of Apocynaceae, Clusiaceae, Euphorbiaceae, and Rubiaceae. Journal of Biomedicine and Biotechnology. 2012; doi:10.1155/2012/281078.
Wijetunge DCR, Perera HKI. A novel in vitro method to detect inhibitors of protein glycation. Asian Journal of Medical Sciences. 2014;5(3):15–21.
Laemmli UK. Cleavage of structural proteins during the assembly of the head of bacteriophage T4. Nature. 1970;227(5259):680–5.
Perera HKI, Ranasinghe HASK. A simple method to detect plant based inhibitors of glycation induced protein cross-linking. Asian Journal of Medical Sciences. 2015;6(1):28–33.
Dasgupta B, Pandey VB. A new Indian source of diosgenin (Costus speciosus). Experientia. 1970;26(5):475–6.
Srivastava S, Singh P, Mishra G, Jha KK, Khosa RL. Costus speciosus (Keukand): a review. Der Pharmacia Sinica. 2011;2(1):118–28.
Naidu PB, Ponmurugan P, Begum MS, Mohan K, Meriga B, RavindarNaik R, et al. Diosgenin reorganises hyperglycaemia and distorted tissue lipid profile in high‐fat diet-streptozotocin‐induced diabetic rats. J Sci Food Agric. 2015;95(15):3177–82.
Eliza J, Daisy P, Ignacimuthu S, Duraipandiyan V. Antidiabetic and antilipidemic effect of eremanthin from Costus speciosus (Koen.) Sm., in STZ-induced diabetic rats. Chem Biol Interact. 2009;182(1):67–72.
Eliza J, Daisy P, Ignacimuthu S, Duraipandiyan V. Normoglycemic and hypolipidemic effect of costunolide isolated from Costus speciosus (Koen ex. Retz.) Sm. in streptozotocin-induced diabetic rats. Chem Biol Interact. 2009;79(2):329–34.
Mosihuzzaman M, Nahar N, Ali L, Rokeya B, Khan AK, Nur EAM, et al. Hypoglycemic effects of three plants from eastern Himalayan belt. Diabetes Research. 1994;26(3):127–38.
Jothivel NPS, Appachi M, Singaravel S, Rasilingam D, Deivasigamani K, Thangavel S. Anti-diabetic activity of methanol leaf extract of Costus pictus D. Don in alloxan-induced diabetic rats. Journal of Health Sciences. 2007;53(6):655–63.
Devi VD, Asna U. Possible Hypoglycemic Attributes of Morus indica 1. and Costus speciosus: An in vitro Study. Malaysian Journal of Nutrition. 2015;21(1):83–91.
Dearlove RP, Greenspan P, Hartle DK, Swanson RB, Hargrove JL. Inhibition of protein glycation by extracts of culinary herbs and spices. J Med Food. 2008;11(2):275–81.
Eliza J, Daisy P, Ignacimuthu S. Antioxidant activity of costunolide and eremanthin isolated from Costus speciosus (Koen ex. Retz) Sm. Chem Biol Interact. 2010;188(3):467–72.
Majumdar M, Parihar PS. Antibacterial, anti-oxidant and antiglycation potential of Costus pictus from southern region, India. Asian J Plant Sci Res. 2012;2(2):95–101.
Nair SV, Hettihewa M, Rupasinghe HP. Apoptotic and inhibitory effects on cell proliferation of hepatocellular carcinoma HepG2 cells by methanol leaf extract of Costus speciosus. BioMed Research International. 2014; doi:10.1155/2014/637098.
The authors acknowledge the University of Peradeniya research grant RG/AF/2013/33/M for financial assistance; Prof. R. Sivakanesan for revising the manuscript; Mr. A.M.P.S.T.M. Bandara for assistance with electrophoresis; Mr. G. Gunasekera for photography; Ms. S.L. De Silva for collection of plant material; and the Deputy Director, National Herbarium, Peradeniya, for identification and authentication of the plants.
Department of Biochemistry, Faculty of Medicine, University of Peradeniya, Peradeniya, Sri Lanka
Handunge Kumudu Irani Perera, Walgama Kankanamlage Vindhya Kalpani Premadasa & Jeyakumaran Poongunran
Postgraduate Institute of Science, University of Peradeniya, Peradeniya, Sri Lanka
Walgama Kankanamlage Vindhya Kalpani Premadasa & Jeyakumaran Poongunran
Handunge Kumudu Irani Perera
Walgama Kankanamlage Vindhya Kalpani Premadasa
Jeyakumaran Poongunran
Correspondence to Handunge Kumudu Irani Perera.
HKIP was involved in conception and design, obtaining grants, supervision and overall coordination of the project, acquisition of data, analysis and interpretation of data, preparation of the manuscript and critical revision of the manuscript. WKVKP carried out the glycation experiments. JP carried out the enzyme inhibitory assays. All authors read and approved the final manuscript.
HKIP [B.V.Sc., M.Phil., Ph.D. (Glasgow)], Senior Lecturer in Biochemistry, WKVKP (B.A.M.S.), M.Sc. student in Clinical Biochemistry, JP (B.Sc.), M.Sc. student in Clinical Biochemistry.
Perera, H.K.I., Premadasa, W.K.V.K. & Poongunran, J. α-glucosidase and glycation inhibitory effects of costus speciosus leaves. BMC Complement Altern Med 16, 2 (2015). https://doi.org/10.1186/s12906-015-0982-z
Keywords: Costus speciosus leaf; α-amylase; α-glucosidase; Glycation
Theoretical Atlas
He had bought a large map representing the sea, / Without the least vestige of land: / And the crew were much pleased when they found it to be / A map they could all understand.
Higher Gauge Theory in Edinburgh – Part I
Posted by Jeffrey Morton under 2-groups, categorification, conferences, double categories, geometry, groupoids, higher dimensional algebra, moduli spaces, string theory, tqft
The main thing happening in my end of the world is that it's relocated from Europe back to North America. I'm taking up a teaching postdoc position in the Mathematics and Computer Science department at Mount Allison University starting this month. However, amidst all the preparations and moving, I was also recently in Edinburgh, Scotland for a workshop on Higher Gauge Theory and Higher Quantization, where I gave a talk called 2-Group Symmetries on Moduli Spaces in Higher Gauge Theory. That's what I'd like to write about this time.
Edinburgh is a beautiful city, though since the workshop was held at Heriot-Watt University, whose campus is outside the city itself, I only got to see it on the Saturday after the workshop ended. However, John Huerta and I spent a while walking around, and as it turned out, climbing a lot: first the Scott Monument, from which I took this photo down Princes Street:
And then up a rather large hill called Arthur's Seat, in Holyrood Park next to the Scottish Parliament.
The workshop itself had an interesting mix of participants. Urs Schreiber gave the most mathematically sophisticated talk, and mine was also quite category-theory-minded. But there were also some fairly physics-minded talks that are interesting to me as well because they show the source of these ideas. In this first post, I'll begin with my own, and continue with David Roberts' talk on constructing an explicit string bundle. …
2-Group Symmetries of Moduli Spaces
My own talk, based on work with Roger Picken, boils down to a couple of observations about the notion of symmetry, and applies them to a discrete model in higher gauge theory. It's the kind of model you might use if you wanted to do lattice gauge theory for a BF theory, or some other higher gauge theory. But the discretization is just a convenience to avoid having to deal with infinite dimensional spaces and other issues that don't really bear on the central point.
Part of that point was described in a previous post: it has to do with finding a higher analog for the relationship between two views of symmetry: one is "global" (I found the physics-inclined part of the audience preferred "rigid"), to do with a group action on the entire space; the other is "local", having to do with treating the points of the space as objects of a groupoid whose morphisms show how points are related to each other. (Think of trying to describe the orbit structure of just the part of a group action that relates points in a little neighborhood on a manifold, say.)
In particular, we're interested in the symmetries of the moduli space of connections (or, depending on the context, flat connections) on a space, so the symmetries are gauge transformations. Now, here already some of the physically-inclined audience objected that these symmetries should just be eliminated by taking the quotient space of the group action. This is based on the slogan that "only gauge-invariant quantities matter". But this slogan has some caveats: it only applies to closed manifolds, for one. When there are boundaries, it isn't true, and to describe the boundary we need something which acts as a representation of the symmetries. Urs Schreiber pointed out a well-known example: the Chern-Simons action, a functional on a certain space of connections, is not gauge-invariant. Indeed, the boundary terms that show up due to this non-invariance explain why there is a Wess-Zumino-Witten theory associated with the boundaries when the bulk is described by Chern-Simons.
Now, I've described a lot of the idea of this talk in the previous post linked above, but what's new has to do with how this applies to moduli spaces that appear in higher gauge theory based on a 2-group $\mathcal{G}$. The points in these spaces are connections on a manifold $M$. In particular, since a 2-group is a group object in categories, the transformation groupoid (which captures global symmetries of the moduli space) will be a double category. It turns out there is another way of seeing this double category by local descriptions of the gauge transformations.
In particular, general gauge transformations in HGT are combinations of two special types, described geometrically by functions valued in the group of objects of $\mathcal{G}$, or by 1-forms valued in the group of morphisms based at the identity. If we think of connections as functors from the fundamental 2-groupoid of $M$ into $\mathcal{G}$, these correspond to pseudonatural transformations between these functors. The main point is that there are also two special types of these, called "strict" and "costrict". The strict ones are just natural transformations, where the naturality square commutes strictly. The costrict ones are also called ICONs (for "identity component oplax natural transformations" – see the paper by Steve Lack linked from the nlab page above for an explanation of "costrictness"). They assign the identity morphism to each object, but the naturality square commutes only up to a specified 2-cell. Any pseudonatural transformation factors into a strict part and a costrict part.
The point is that taking these two types of transformation to be the horizontal and vertical morphisms of a double category, we get something that very naturally arises by the action of a big 2-group of symmetries on a category. We also find something which doesn't happen in ordinary gauge theory: that only the strict gauge transformations arise from this global symmetry. The costrict ones must already be the morphisms in the category being acted on. This category plays the role of the moduli space in the normal 1-group situation. So moving to 2-groups reveals that in general we should distinguish between global/rigid symmetries of the moduli space, which are strict gauge transformations, and costrict ones, which do not arise from the global 2-group action and should be thought of as intrinsic to the moduli space.
String Bundles
David Roberts gave a rather interesting talk called "Constructing Explicit String Bundles". There are some notes for this talk here. The point is simply to give an explicit construction of a particular 2-group bundle. There is a lot of general abstract theory about 2-bundles around, and a fair amount of work that manipulates physically-motivated descriptions of things that can presumably be modelled with 2-bundles. There has been less work on giving a mathematically rigorous description of specific, concrete 2-bundles.
This one is of interest because it's based on the String 2-group. Details are behind that link, but roughly the classifying space of the string 2-group (a homotopy 2-type) is fibred over the classifying space of the underlying Lie group (a 1-type). The exact map is determined by taking a pullback along a certain characteristic class (which is a map out of that classifying space). Saying "the" string 2-group is a bit of a misnomer, by the way, since such a 2-group exists for every simply connected compact Lie group. The group that's involved here is String(5), the string 2-group associated to Spin(5), the universal cover of the rotation group SO(5). This is the one that determines whether a given manifold can support a "string structure". A string structure on a manifold, therefore, is a lift of a spin structure, which determines whether one can have a spin bundle over it, hence consistently talk about a spin connection which gives parallel transport for spinor fields on it. The string structure determines if one can consistently talk about a string bundle over the manifold, and hence a 2-group connection giving parallel transport for strings.
In this particular example, the idea was to find, explicitly, a string bundle over Minkowski space – or rather its conformal compactification. In point of fact, this particular one is for String(5), and is over 6-dimensional Minkowski space, via that compactification. This particular case is convenient because it's possible to show abstractly that it has exactly one nontrivial class of string bundles, so exhibiting one gives a complete classification. The details of the construction are in the notes linked above. The technical details rely on the fact that the compactified space can be coordinatized nicely using the projective quaternionic plane, but conceptually it relies on presenting it as a quotient of Lie groups; because of how the lifting works, this quotient description also lifts to the relevant covering groups, and it means there's a string bundle whose fibre can be described explicitly.
While this is only one string bundle, and not a particularly general situation, it's nice to see that there's a nice elegant presentation which gives such a bundle explicitly (by constructing cocycles valued in the crossed module associated to the string 2-group, which give its transition functions).
(Here ends Part I of this discussion of the workshop in Edinburgh. The next part will describe the physics-oriented talks, and a third will cover Urs Schreiber's very nice introduction to Higher Geometric Quantization.)
Lecture Series on Sheaves and Motives – Part III
Posted by Jeffrey Morton under category theory, cohomology, geometry, homotopy theory, localization, motives, simplicial sets, spans
This is the 100th entry on this blog! It's taken a while, but we've arrived at a meaningless but convenient milestone. This post constitutes Part III of the posts on the topics course which I shared with Susama Agarwala. In the first, I summarized the core idea in the series of lectures I did, which introduced toposes and sheaves, and explained how, at least for appropriate sites, sheaves can be thought of as generalized spaces. In the second, I described the guest lecture by John Huerta which described how supermanifolds can be seen as an example of that notion.
In this post, I'll describe the machinery I set up as part of the context for Susama's talks. The connections are a bit tangential, but it gives some helpful context for what's to come. Namely, my last couple of lectures were on sheaves with structure, and derived categories. In algebraic geometry and elsewhere, derived categories are a common tool for studying spaces. They have a cohomological flavour, because they involve sheaves of complexes (or complexes of sheaves) of abelian groups. Having talked about the background of sheaves in Part I, let's consider how these categories arise.
Structured Sheaves and Internal Constructions in Toposes
The definition of a (pre)sheaf as a functor valued in $\mathbf{Sets}$ is the basic one, but there are parallel notions for presheaves valued in categories other than $\mathbf{Sets}$ – for instance, in abelian groups, rings, simplicial sets, complexes, etc. Abelian groups are particularly important for geometry/cohomology.
But for the most part, as long as the target category can be defined in terms of sets and structure maps (such as the multiplication map for groups, face maps for simplicial sets, or boundary maps in complexes), we can just think of these in terms of objects "internal to a category of sheaves". That is, we have a definition of "abelian group object" in any reasonably nice category – in particular, any topos. Then the category of abelian group objects in the sheaf topos $Sh(\mathcal{S})$ is equivalent to the category of abelian-group-valued sheaves on the site $\mathcal{S}$, denoted $Sh(\mathcal{S}, \mathbf{AbGrp})$. (As usual, I'll omit the Grothendieck topology in the notation from now on, though it's important that it is still there.)
Sheaves of abelian groups are supposed to generalize the prototypical example, namely sheaves of functions valued in abelian groups (indeed, rings), such as $\mathbb{Z}$, $\mathbb{R}$, or $\mathbb{C}$.
To begin with, we look at the category $Sh(\mathcal{S}, \mathbf{AbGrp})$, which amounts to the same thing as the category of abelian group objects in $Sh(\mathcal{S})$. This inherits several properties from $\mathbf{AbGrp}$ itself. In particular, it's an abelian category: this gives us that there are direct sums of objects, a zero object, all morphisms have kernels and cokernels, every monomorphism is a kernel and every epimorphism is a cokernel, and so forth. These useful properties all hold because they can be checked locally: at each object $U$ of the site, the direct sum of sheaves of abelian groups just gives the direct sum of the groups of sections over $U$, and likewise for the other properties.
So, sheaves of abelian groups can be seen as abelian groups in a topos of sheaves $Sh(\mathcal{S})$. In the same way, other kinds of structures can be built up inside the topos of sheaves, and there is a corresponding "external" point of view for each. One good example would be simplicial objects: one can talk about simplicial objects in $Sh(\mathcal{S})$, or about sheaves of simplicial sets, $Sh(\mathcal{S}, \mathbf{sSets})$. (Though it's worth noting that since simplicial sets model infinity-groupoids, there are more sophisticated forms of the sheaf condition which can be applied here. But for now, this isn't what we need.)
Recall that simplicial objects in a category $\mathcal{C}$ are functors $\Delta^{op} \to \mathcal{C}$ – that is, $\mathcal{C}$-valued presheaves on $\Delta$, the simplex category. This has the nonnegative integers as its objects, thought of as the ordered sets $[n] = \{0, 1, \dots, n\}$, and the morphisms from $[m]$ to $[n]$ are the order-preserving functions from $[m]$ to $[n]$. If $\mathcal{C} = \mathbf{Sets}$, we get "simplicial sets", where $X_n = X([n])$ is the "set of $n$-dimensional simplices". The various morphisms in $\Delta$ turn into (composites of) the face and degeneracy maps. Simplicial sets are useful because they are a good model for "spaces".
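Since the face and degeneracy maps and their identities come up repeatedly below, here is a minimal Python sketch (helper names chosen only for illustration) of the coface and codegeneracy maps in $\Delta$, with a spot-check of two of the standard identities:

```python
# Coface map delta_i (the injection [n-1] -> [n] that skips the value i) and
# codegeneracy sigma_i (the surjection [n+1] -> [n] that repeats the value i),
# written as functions on nonnegative integers; their opposites are the face and
# degeneracy maps of a simplicial set.
def coface(i):
    return lambda k: k if k < i else k + 1

def codegeneracy(i):
    return lambda k: k if k <= i else k - 1

def compose(f, g):
    return lambda k: f(g(k))

def table(f, domain_size):
    return tuple(f(k) for k in range(domain_size))

# Spot-check two of the (co)simplicial identities on the domain {0, 1, 2}:
#   delta_j . delta_i = delta_i . delta_{j-1}   for i < j
#   sigma_i . delta_i = identity
i, j = 1, 3
assert table(compose(coface(j), coface(i)), 3) == table(compose(coface(i), coface(j - 1)), 3)
assert table(compose(codegeneracy(i), coface(i)), 3) == (0, 1, 2)
print(table(compose(coface(j), coface(i)), 3))   # (0, 2, 4)
```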
Just as with abelian groups, simplicial objects in $Sh(\mathcal{S})$ can also be seen as sheaves on $\mathcal{S}$ valued in the category of simplicial sets, i.e. objects of $Sh(\mathcal{S}, \mathbf{sSets})$. These things are called, naturally, "simplicial sheaves", and there is a rather extensive body of work on them. (See, for instance, the canonical book by Goerss and Jardine.)
This correspondence is just because there is a fairly obvious bunch of isomorphisms turning functors with two inputs into functors with one input returning another functor with one input:

$(\mathbf{Sets}^{\Delta^{op}})^{\mathcal{S}^{op}} \cong \mathbf{Sets}^{\Delta^{op} \times \mathcal{S}^{op}} \cong (\mathbf{Sets}^{\mathcal{S}^{op}})^{\Delta^{op}}$

(These are all presheaf categories – if we put a trivial topology on $\Delta$, we can refine this to consider only those functors which are sheaves in every position, where we use a certain product topology on $\Delta \times \mathcal{S}$.)
Another relevant example would be complexes. This word is a bit overloaded, but here I'm referring to the sort of complexes appearing in cohomology, such as the de Rham complex, where the terms of the complex are the sheaves of differential forms on a space, linked by the exterior derivative. A complex is a sequence of abelian groups with boundary maps $\partial_n : A_n \to A_{n-1}$ (or just $\partial$ for short), like so:

$\cdots \to A_{n+1} \xrightarrow{\partial_{n+1}} A_n \xrightarrow{\partial_n} A_{n-1} \to \cdots$

with the property that $\partial_n \circ \partial_{n+1} = 0$. Morphisms between these are sequences of morphisms between the terms of the complexes which commute with all the boundary maps. These all assemble into a category of complexes $\mathbf{Ch}$. We also have $\mathbf{Ch}^+$ and $\mathbf{Ch}^-$, the (full) subcategories of complexes where all the negative (respectively, positive) terms are trivial.
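To make the "boundary squared is zero" condition concrete, here is a small runnable sketch of a three-term complex of free abelian groups, with boundary maps given by integer matrices – in this case the simplicial chain complex of a filled triangle:

```python
import numpy as np

# A small chain complex  A_2 --d2--> A_1 --d1--> A_0  of free abelian groups:
# three vertices, three edges, one 2-cell.
d1 = np.array([[-1,  0,  1],    # edges -> vertices
               [ 1, -1,  0],
               [ 0,  1, -1]])
d2 = np.array([[1],             # the 2-cell -> its three boundary edges
               [1],
               [1]])

# The defining condition of a complex: consecutive boundary maps compose to zero.
assert np.all(d1 @ d2 == 0)
print((d1 @ d2).ravel())        # [0 0 0]
```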
One can generalize this to replace $\mathbf{AbGrp}$ by any category enriched in abelian groups – the enrichment is what we need to make sense of the requirement that a morphism (such as $\partial \circ \partial$) is zero. In particular, one can generalize it to sheaves of abelian groups. This is an example where the above discussion about internalization can be extended to more than one structure at a time: "sheaves-of-(complexes-of-abelian-groups)" is equivalent to "complexes-of-(sheaves-of-abelian-groups)".
This brings us to the next point, which is that, within an abelian category such as this one, the last two examples, simplicial objects and complexes, are secretly the same thing.
Dold-Puppe Correspondence
The fact I just alluded to is a special case of the Dold-Puppe correspondence, which says:
Theorem: In any abelian category $\mathcal{A}$, the category of simplicial objects $\mathcal{A}^{\Delta^{op}}$ is equivalent to the category of positive chain complexes $\mathbf{Ch}^+(\mathcal{A})$.

The better-known name "Dold-Kan Theorem" refers to the case where $\mathcal{A} = \mathbf{AbGrp}$. If $\mathcal{A}$ is a category of $\mathbf{AbGrp}$-valued sheaves, the Dold-Puppe correspondence amounts to using Dold-Kan at each object $U$ of the site.
The point is that complexes have only coboundary maps, rather than a plethora of many different face and boundary maps, so we gain some convenience when we're looking at, for instance, abelian groups in our category of spaces, by passing to this equivalent description.
The correspondence works by way of two maps (for more details, see the book by Goerss and Jardine linked above, or see the summary here). The easy direction is the Moore complex functor, $N : \mathcal{A}^{\Delta^{op}} \to \mathbf{Ch}^+(\mathcal{A})$. On objects, it gives the intersection of the kernels of all but one of the face maps:

$N(X)_n = \bigcap_{i=1}^{n} \ker\left( d_i : X_n \to X_{n-1} \right)$

The boundary map from this is then just the remaining face map $d_0$, restricted to these intersections. This ends up satisfying the "boundary-squared is zero" condition because of the simplicial identities for the face maps.
The other direction is a little more complicated, so for current purposes, I'll leave you to follow the references above, except to say that the functor $\Gamma$ from complexes to simplicial objects in $\mathcal{A}$ is defined so as to be adjoint to $N$. Indeed, $N$ and $\Gamma$ together form an adjoint equivalence of the categories.
Chain Homotopies and Quasi-Isomorphisms
One source of complexes in mathematics is cohomology theories. So, for example, there is de Rham cohomology, where one starts with the complex whose $k$-th term is the space $\Omega^k(M)$ of smooth differential $k$-forms on some smooth manifold $M$, with the exterior derivatives as the coboundary maps. But no matter which complex you start with, there is a sequence of cohomology groups, because we have a sequence of cohomology functors

$H^n : \mathbf{Ch} \to \mathbf{AbGrp}$

given by the quotients

$H^n(A) = \ker\left( d_n : A^n \to A^{n+1} \right) / \mathrm{im}\left( d_{n-1} : A^{n-1} \to A^n \right)$

That is, it's the cocycles (things whose coboundary is zero), up to equivalence where cocycles are considered equivalent if their difference is a coboundary (i.e. something which is itself the coboundary of something else). In fact, these assemble into a functor $H^\bullet : \mathbf{Ch} \to \mathbf{Ch}$, since there are natural transformations between these functors

$H^n \Rightarrow H^{n+1}$

which just come from the restrictions of the coboundary maps $d_n$ to the cocycles. (In fact, this makes the maps trivial – but the main point is that this restriction is well-defined on equivalence classes, and so we get an actual complex again.) The fact that we get a functor means that any chain map $f : A \to B$ gives a corresponding $H^\bullet(f) : H^\bullet(A) \to H^\bullet(B)$.
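As a concrete illustration of the quotient "cocycles modulo coboundaries", the sketch below computes the dimensions of the cohomology of a short cochain complex of finite-dimensional vector spaces by rank–nullity (working over a field rather than the integers, to keep the linear algebra elementary; the matrices are invented for illustration):

```python
import numpy as np

def cohomology_dims(coboundaries, dims):
    """Dimensions of H^n = ker(d_n) / im(d_{n-1}) for a cochain complex of
    finite-dimensional vector spaces.  dims[n] is dim A^n; coboundaries[n] is
    the matrix of d_n : A^n -> A^{n+1}, or None at the top of the complex."""
    out = []
    for n, dim in enumerate(dims):
        d_out = coboundaries[n]
        d_in = coboundaries[n - 1] if n > 0 else None
        rank_out = 0 if d_out is None else np.linalg.matrix_rank(d_out)
        rank_in = 0 if d_in is None else np.linalg.matrix_rank(d_in)
        out.append(dim - rank_out - rank_in)   # nullity(d_n) minus rank(d_{n-1})
    return out

# Example: 0 -> A^0 --d0--> A^1 --d1--> A^2 -> 0, with d1 . d0 = 0.
d0 = np.array([[1.0], [1.0], [1.0]])                  # A^0 = k,   A^1 = k^3
d1 = np.array([[1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])   # A^2 = k^2
assert np.allclose(d1 @ d0, 0)
print(cohomology_dims([d0, d1, None], [1, 3, 2]))     # [0, 0, 0]: this complex is acyclic
```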
Now, the original motivation of cohomology for a space, like the de Rham cohomology of a manifold $M$, is to measure something about the topology of $M$. If $M$ is topologically trivial (say, a contractible space), then its cohomology groups are all trivial. In the general setting, we say that a complex $A$ is acyclic if all the $H^n(A)$ are zero. But of course, this doesn't mean that the complex itself is zero.
More generally, just because two complexes have isomorphic cohomology doesn't mean they are themselves isomorphic, but we say that a chain map $f : A \to B$ is a quasi-isomorphism if $H^\bullet(f)$ is an isomorphism. The idea is that, as far as we can tell from the information that cohomology detects, $f$ might as well be an isomorphism.
Now, for spaces, as represented by simplicial sets, we have a similar notion: a map between spaces is a quasi-isomorphism if it induces an isomorphism on cohomology. Then the key thing is the Whitehead Theorem (viz), which in this language says:
Theorem: If $f : X \to Y$ is a quasi-isomorphism, it is a homotopy equivalence.

That is, it has a homotopy inverse $g : Y \to X$, which means there are homotopies $g \circ f \sim \mathrm{id}_X$ and $f \circ g \sim \mathrm{id}_Y$.
What about for complexes? We said that in an abelian category, simplicial objects and complexes are equivalent constructions by the Dold-Puppe correspondence. However, the question of what is homotopy equivalent to what is a bit more complicated in the world of complexes. The convenience we gain when passing from simplicial objects to the simpler structure of complexes must be paid for it with a little extra complexity in describing what corresponds to homotopy equivalences.
The usual notion of a chain homotopy between two chain maps $f, g : A \to B$ is a collection of maps which shift degrees, $h_n : A_n \to B_{n+1}$, such that $\partial h + h \partial = f - g$. That is, the "coboundary of $h$" is the difference between $f$ and $g$. (This is the "co" version of the usual intuition of a homotopy, whose ingoing and outgoing boundaries are the things which are supposed to be homotopic.)
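Here is a small runnable sketch of that equation: starting from a complex and a chain map $g$, any degree-shifting collection $h$ produces a second chain map $f = g + \partial h + h \partial$ which is, by construction, chain homotopic to $g$ (the matrices are invented for illustration):

```python
import numpy as np

# The triangle complex again:  A_2 --d2--> A_1 --d1--> A_0.
d1 = np.array([[-1.0, 0.0, 1.0], [1.0, -1.0, 0.0], [0.0, 1.0, -1.0]])
d2 = np.array([[1.0], [1.0], [1.0]])
assert np.allclose(d1 @ d2, 0)

# Start from the identity chain map g, pick an arbitrary degree-raising collection
# h = (h0 : A_0 -> A_1, h1 : A_1 -> A_2), and set f = g + d h + h d degreewise.
h0 = np.array([[2.0, 0.0, 1.0], [0.0, 3.0, 0.0], [1.0, 0.0, 0.0]])   # A_0 -> A_1
h1 = np.array([[1.0, -2.0, 0.5]])                                     # A_1 -> A_2
g0, g1, g2 = np.eye(3), np.eye(3), np.eye(1)

f0 = g0 + d1 @ h0                  # in degree 0 there is no incoming boundary term
f1 = g1 + d2 @ h1 + h0 @ d1
f2 = g2 + h1 @ d2                  # in the top degree there is no outgoing boundary term

# f is again a chain map (it commutes with the boundary maps), and by construction
# it is chain homotopic to g via h.
assert np.allclose(f0 @ d1, d1 @ f1)
assert np.allclose(f1 @ d2, d2 @ f2)
print(np.allclose(f0 @ d1, d1 @ f1), np.allclose(f1 @ d2, d2 @ f2))   # True True
```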
The Whitehead theorem doesn't work for chain complexes: the usual "naive" notion of chain homotopy isn't quite good enough to correspond to the notion of homotopy in spaces. (There is some discussion of this in the nLab article on the subject.) That is the reason for…
Derived Categories
Taking "derived categories" for some abelian category can be thought of as analogous, for complexes, to finding the homotopy category for simplicial objects. It compensates for the fact that taking a quotient by chain homotopy doesn't give the same "homotopy classes" of maps of complexes as the corresponding operation over in spaces.
That is, simplicial sets, as a model category, know everything about the homotopy type of spaces: so taking simplicial objects in a category $\mathcal{C}$ is like internalizing the homotopy theory of spaces in $\mathcal{C}$. So, if what we're interested in are the homotopical properties of spaces described as simplicial sets, we want to "mod out" by homotopy equivalences. However, we have two notions which are easy to describe in the world of complexes, which between them capture the notion of "homotopy" in simplicial sets. These are chain homotopies and quasi-isomorphisms. So, naturally, we mod out by both notions.
So, suppose we have an abelian category $\mathcal{A}$. In the background, keep in mind the typical example where $\mathcal{A}$ is the category of abelian group objects in a topos of sheaves, and even where those sheaves live on (the open sets of) some reasonably nice space $X$, if it helps to picture things. Then the derived category of $\mathcal{A}$ is built up in a few steps:

Take the category $\mathbf{Ch}(\mathcal{A})$ of complexes. (This stands in for "spaces in $\mathcal{A}$" as above, although we've dropped the "+", so the correct analogy is really with spectra. This is a bit too far afield to get into here, though, so for now let's just ignore it.)

Take morphisms only up to homotopy equivalence. That is, define the equivalence relation $f \sim g$ whenever there is a chain homotopy $h$ with $\partial h + h \partial = f - g$. Then $K(\mathcal{A})$ is the quotient of $\mathbf{Ch}(\mathcal{A})$ by this relation.

Localize at quasi-isomorphisms. That is, formally throw in inverses for all quasi-isomorphisms, to turn them into actual isomorphisms. The result is the derived category $D(\mathcal{A})$.

(Since we have direct sums of complexes (componentwise), it's also possible to think of the last step as defining $D(\mathcal{A}) = K(\mathcal{A}) / \mathbf{Ac}$, where $\mathbf{Ac}$ is the subcategory of acyclic complexes – the ones whose cohomology groups are all zero.)
Explicitly, the morphisms of $D(\mathcal{A})$ can be thought of as "zig-zags" in $K(\mathcal{A})$,

$A \leftarrow A_1 \rightarrow A_2 \leftarrow \cdots \rightarrow B$

where all the left-pointing arrows are quasi-isomorphisms. (The left-pointing arrows are standing in for their new inverses in $D(\mathcal{A})$, pointing right.) This relates to the notion of a category of spans: in a reasonably nice category, we can always compose these zig-zags to get one of length two, with one leftward and one rightward arrow. In general, though, this might not happen.
Now, the point here is that this is a way of extracting "homotopical" or "cohomological" information about $\mathcal{A}$, and hence about a space $X$ if $\mathcal{A}$ is built from sheaves on $X$ or something similar. In the next post, I'll talk about Susama's series of lectures, on the subject of motives. This uses some of the same technology described above, in the specific context of schemes (which introduces some extra considerations specific to that world). Its aim is to produce a category (and a functor into it) which captures all the cohomological information about spaces – in some sense a universal cohomology theory from which any other can be found.
Talk by John Huerta – The Functor of Points approach to Supermanifolds
Posted by Jeffrey Morton under sheaves, Supergeometry, toposes
John Huerta visited here for about a week earlier this month, and gave a couple of talks. The one I want to write about here was a guest lecture in the topics course Susama Agarwala and I were teaching this past semester. The course was about topics in category theory of interest to geometry, and in the case of this lecture, "geometry" means supergeometry. It follows the approach I mentioned in the previous post about looking at sheaves as a kind of generalized space. The talk was an introduction to a program of seeing supermanifolds as a kind of sheaf on the site of "super-points". This approach was first proposed by Albert Schwartz, though see, for instance, this review by Christophe Sachse for more about this approach, and this paper (comparing the situation for real and complex (super)manifolds) for more recent work.
It's amazing how many geometrical techniques can be applied in quite general algebras once they're formulated correctly. It's perhaps less amazing for supermanifolds, in which commutativity fails in about the mildest possible way. Essentially, the algebras in question split into bosonic and fermionic parts. Everything in the bosonic part commutes with everything, and the fermionic part commutes "up to a negative sign" within itself.
Supermanifolds
Supermanifolds are geometric objects, which were introduced as a setting on which "supersymmetric" quantum field theories could be defined. Whether or not "real" physics has this symmetry (the evidence is still pending), these are quite nicely behaved theories. (Throwing in extra symmetry assumptions tends to make things nicer, and supersymmetry is in some sense the maximum extra symmetry we might reasonably hope for in a QFT.)
Roughly, the idea is that supermanifolds are spaces like manifolds, but with some non-commuting coordinates. Supermanifolds are therefore in some sense "noncommutative spaces". Noncommutative algebraic or differential geometry starts with various dualities to the effect that some category of spaces is equivalent to the opposite of a corresponding category of algebras – for instance, a manifold $M$ corresponds to the algebra $C^\infty(M)$ of smooth functions on it. So a generalized category of "spaces" can be found by dropping the "commutative" requirement from that statement. The category of supermanifolds only weakens the condition slightly: the algebras are $\mathbb{Z}_2$-graded, and are "supercommutative", i.e. they commute up to a sign which depends on the grading.
Now, the conventional definition of supermanifolds, as with schemes, is to say that they are spaces equipped with a "structure sheaf" which defines an appropriate class of functions. For ordinary (real) manifolds, this would be the sheaf assigning to an open set $U$ the ring $C^\infty(U)$ of all the smooth real-valued functions on it. The existence of an atlas of charts for the manifold amounts to saying that the structure sheaf locally looks like $C^\infty(U)$ for some open set $U \subseteq \mathbb{R}^m$. (For fixed dimension $m$.)
For supermanifolds, the condition on the local rings says that, for fixed dimension $(m|n)$, an $(m|n)$-dimensional supermanifold has a structure sheaf in which they look like

$C^\infty(U) \otimes \Lambda^\bullet(\theta_1, \dots, \theta_n)$

In this, $C^\infty(U)$ is as above (for some open $U \subseteq \mathbb{R}^m$), and the notation

$\Lambda^\bullet(\theta_1, \dots, \theta_n)$

refers to the exterior algebra, which we can think of as polynomials in the $\theta_i$, with the wedge product, which satisfies $\theta_i \wedge \theta_j = -\theta_j \wedge \theta_i$ (in particular, $\theta_i \wedge \theta_i = 0$). The idea is that one is supposed to think of this as the algebra of smooth functions on a space with $m$ ordinary dimensions, and $n$ "anti-commuting" dimensions with coordinates $\theta_i$. The commuting variables, say $x_1, \dots, x_m$, are called "bosonic" or "even", and the anticommuting ones are "fermionic" or "odd". (The term "fermionic" is related to the fact that, in quantum mechanics, when building a Hilbert space for a bunch of identical fermions, one takes the antisymmetric part of the tensor product of their individual Hilbert spaces, so that, for instance, $\psi \wedge \psi = 0$ for any single-particle state $\psi$.)
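Since everything below leans on how these anticommuting coordinates behave, here is a minimal Python sketch of a Grassmann (exterior) algebra on a few generators, with basis monomials represented as increasing tuples of indices and signs obtained by counting transpositions; the names are illustrative only:

```python
from itertools import combinations

def wedge(mono_a, mono_b):
    """Multiply two basis monomials (tuples of strictly increasing generator
    indices); returns (sign, monomial), or (0, None) if a generator repeats."""
    if set(mono_a) & set(mono_b):
        return 0, None                       # theta_i ^ theta_i = 0
    sign, arr = 1, list(mono_a) + list(mono_b)
    # Count the transpositions needed to sort the indices: each swap flips the sign.
    for i in range(len(arr)):
        for j in range(len(arr) - 1 - i):
            if arr[j] > arr[j + 1]:
                arr[j], arr[j + 1] = arr[j + 1], arr[j]
                sign = -sign
    return sign, tuple(arr)

q = 3
basis = [m for k in range(q + 1) for m in combinations(range(1, q + 1), k)]
print(len(basis))                            # 2**q = 8: the algebra is finite-dimensional

# Generators anticommute, and homogeneous monomials a, b of degrees |a|, |b|
# satisfy the "super" sign rule  a b = (-1)^{|a||b|} b a.
assert wedge((1,), (2,)) == (-1 * wedge((2,), (1,))[0], (1, 2))
a, b = (1,), (2, 3)
sign_ab, m_ab = wedge(a, b)
sign_ba, m_ba = wedge(b, a)
assert m_ab == m_ba and sign_ab == (-1) ** (len(a) * len(b)) * sign_ba
print(sign_ab, m_ab)                          # 1 (1, 2, 3)
```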
The structure sheaf picture can therefore be thought of as giving an atlas of charts, so that the neighborhoods locally look like "super-domains", the super-geometry equivalent of open sets $U \subseteq \mathbb{R}^m$.
In fact, there's a long-known theorem of Batchelor which says that any real supermanifold is determined exactly by its algebra of "global sections". That is, sections in the local rings ("functions on" open neighborhoods) always glue together to give global sections.
Another way to put this is that every supermanifold can be seen as just a bundle of exterior algebras. That is, a bundle over a base manifold $M$, whose fibres are the "super-points" corresponding to the exterior algebras $\Lambda^\bullet(\theta_1, \dots, \theta_n)$. The base space $M$ is called the "reduced" manifold. Any such bundle gives back a supermanifold, where the algebras in the structure sheaf are the algebras of sections of the bundle.
One shouldn't be too complacent about saying they are exactly the same, though: this correspondence isn't functorial. That is, the maps between supermanifolds are not just bundle maps. (Also, Batchelor's theorem works only for real, not for complex, supermanifolds, where only the local neighborhoods necessarily look like such bundles).
Why, by the way, say that $\Lambda^\bullet(\theta_1, \dots, \theta_n)$ describes a super "point", when its underlying space is a whole vector space? Since the fermionic variables are anticommuting, no term can contain more than one factor of each $\theta_i$, so this is a finite-dimensional algebra. This is unlike $C^\infty(U)$, which suggests that the noncommutative directions are quite different. Every $\theta_i$ is nilpotent, so if we think of a Taylor series for some function – a power series in the $\theta_i$ – we note that no term has an exponent greater than 1 in any $\theta_i$, or total degree higher than $n$ in all the $\theta_i$ together – so one imagines that only infinitesimal behaviour in these directions exists at all. Thus, a supermanifold is like an ordinary $m$-dimensional manifold $M$, built from the ordinary domains $U \subseteq \mathbb{R}^m$, equipped with a bundle whose fibres are a sort of "infinitesimal fuzz" about each point of the "even part" of the supermanifold, described by the $\theta_i$.
But this intuition is a bit vague. We can sharpen it a bit using the functor of points approach…
Supermanifolds as Manifold-Valued Sheaves
As with schemes, there is also a point of view that sees supermanifolds as "ordinary" manifolds, constructed in the topos of sheaves over a certain site. The basic insight behind the picture of these spaces, as in the previous post, is based on the fact that the Yoneda lemma lets us think of sheaves as describing all the "probes" of a generalized space (actually an algebra in this case). The "probes" are the objects of a certain category, and are called "superpoints".
This category is just the opposite of the category of Grassmann algebras (i.e. exterior algebras) – that is, "polynomial" algebras in anticommuting variables, like $\Lambda_q = \Lambda^\bullet(\theta_1, \dots, \theta_q)$. These objects naturally come with a $\mathbb{Z}_2$-grading, whose two parts are spanned, respectively, by the monomials with even and odd degree:

$\Lambda_q = (\Lambda_q)_0 \oplus (\Lambda_q)_1$
This is a $\mathbb{Z}_2$-grading since the even ones commute with anything, and the odd ones anti-commute with each other. So if $a$ and $b$ are homogeneous (live entirely in one grade or the other), then $ab = (-1)^{|a||b|} ba$.
The algebra $\Lambda_q$ should be thought of as (the functions on) the $(0|q)$-dimensional superpoint: it looks like a point, with a $q$-dimensional fermionic tangent space (the "infinitesimal fuzz" noted above) attached. The morphisms in this category from the superpoint for $\Lambda_q$ to the superpoint for $\Lambda_r$ are just the grade-preserving algebra homomorphisms from $\Lambda_r$ to $\Lambda_q$ (remember, we took the opposite of the category of algebras). There are quite a few of these: these objects are not terminal objects like the actual point. But this makes them good probes. This gets to be a site with the trivial topology, so that all presheaves are sheaves.
Then, as usual, a presheaf on this category is to be understood as giving, for each object $\Lambda_q$, the collection of maps from the corresponding superpoint to a space $X$. The case $\Lambda_0 = \mathbb{R}$ gives the set of ordinary points of $X$, and the various other algebras give sets of "$\Lambda_q$-points". This term is based on the analogy that a point of a topological space (or indeed an element of a set) is just the same as a map from the terminal object: the one-point space (or one-element set). Then a "$P$-point" of a space is just a map from another object $P$. If $P$ is not terminal, this is close to the notion of a "subspace" (though a subspace, strictly, would be a monomorphism from $P$). Here, the $\Lambda_q$-points are maps from the superpoint into $X$ or, described as algebra maps, they consist of all the maps from the algebra of functions on $X$ into $\Lambda_q$.
What's more, since this is a functor, we have to have a system of maps between the sets of $\Lambda_q$-points. For any map of superpoints, we should get a corresponding map between these sets. The maps of superpoints are really algebra maps between the Grassmann algebras, of which there are plenty, all determined by the images of the generators $\theta_i$.
Now, really, a sheaf on this site is actually just what we might call a "super-set", with a set $X(\Lambda_q)$ for each $\Lambda_q$. To make supermanifolds, one wants to say they are "manifold-valued sheaves". Since manifolds themselves don't form a topos, one needs to be a bit careful about defining the extra structure which makes a set a manifold.
Thus, a supermanifold $M$ is a manifold constructed in the topos of sheaves on superpoints. That is, $M$ must also be equipped with a topology and a collection of charts defining the manifold structure. These are all construed internally using objects and morphisms in the category of sheaves, where charts are based on super-domains, namely those algebras which look like $C^\infty(U) \otimes \Lambda^\bullet(\theta_1, \dots, \theta_n)$, for $U$ an open subset of $\mathbb{R}^m$.
The reduced manifold which appears in Batchelor's theorem is the manifold of ordinary points, $M(\Lambda_0)$. That is, it is all the $\mathbb{R}$-points, where $\mathbb{R} = \Lambda_0$ is playing the role of functions on the zero-dimensional domain with just one point. All the extra structure in an atlas of charts for all of $M$, needed to make it a supermanifold, amounts to putting the structure of ordinary manifolds on the various $M(\Lambda_q)$ – but in compatible ways.
(Alternatively, we could have described supermanifolds as sheaves on a site of "superdomains", and put all the structure defining a manifold into the site itself. But working over super-points is preferable for the moment, since it makes it clear that manifolds and supermanifolds are just manifestations of the same basic definition, but realized in two different toposes.)
The fact that the manifold structures on the various $M(\Lambda_q)$ must be put on them compatibly means there is a relatively nice way to picture all these spaces.
Values of the Functor of Points as Bundles
The main idea which I find helps to understand the functor of points is that, for every superpoint (i.e. for every Grassmann algebra $\Lambda_q$), one gets a manifold $M(\Lambda_q)$. (Note the convention that $n$ is the odd dimension of $M$, and $q$ is the odd dimension of the probe superpoint.)
Just as every supermanifold is a bundle of superpoints, every manifold $M(\Lambda_q)$ is a perfectly conventional vector bundle over the conventional manifold $M(\Lambda_0)$ of ordinary points. So for each $\Lambda_q$, we get a bundle $M(\Lambda_q) \to M(\Lambda_0)$.
Now this manifold, $M(\Lambda_0)$, consists of exactly the ordinary "points" of $M$, and the fact that these points don't capture everything tells us immediately that the category of supermanifolds is not a category of concrete sheaves (in the sense I explained in the previous post). Put another way, it's not a concrete category – that would mean that there is an underlying set functor, which gives a set for each object, and that morphisms are determined by what they do to underlying sets. Non-concrete categories are, by nature, trickier to understand.
However, the functor of points gives a way to turn the non-concrete supermanifold $M$ into a tower of concrete manifolds $M(\Lambda_q)$, and the morphisms between supermanifolds amount to compatible towers of maps between the various $M(\Lambda_q)$ for each $q$. The fact that the compatibility is controlled by algebra maps explains why this is the same as maps between these bundles of superpoints.
Specifically, then, over a chart the $\Lambda_q$-points are the grade-preserving algebra maps

$C^\infty(U) \otimes \Lambda^\bullet(\theta_1, \dots, \theta_n) \to \Lambda_q$

This splits into maps of the even parts, and of the odd parts, where the Grassmann algebra has even and odd parts $\Lambda_q = (\Lambda_q)_0 \oplus (\Lambda_q)_1$, as above. Similarly, the chart algebra splits into odd and even parts, and since the functions on $U$ are entirely even, these are $C^\infty(U)$ tensored with the even, respectively odd, part of the exterior algebra.

Now, the duality of "hom" and tensor means these maps can be unpacked degreewise, and algebra maps preserve the grading. So we just have tensor products of these with the even and odd parts, respectively, of the probe superpoint. Since the even part $(\Lambda_q)_0$ includes the multiples of the constants, part of this just gives a copy of $U$ itself. The remaining part of $(\Lambda_q)_0$ is nilpotent (since it's made of even-degree polynomials in the nilpotent $\theta_i$), so what we end up with, looking at the bundle over an open neighborhood $U$, is

$U \times \left( (\Lambda_q)_0^{nil} \right)^m \times \left( (\Lambda_q)_1 \right)^n$

The projection map is the obvious projection onto the first factor. These assemble into a bundle over the reduced manifold $M(\Lambda_0)$.
We should think of these bundles as "shifting up" the nilpotent directions of $M$ (which are invisible at the level of ordinary points in $M(\Lambda_0)$) by the algebra $\Lambda_q$. Writing them this way makes it clear that this is functorial in the superpoints: given choices $\Lambda_q$ and $\Lambda_r$, and any morphism between the corresponding superpoints, it's easy to see how we get maps between these bundles.
Now, maps between supermanifolds are the same thing as natural transformations between the functors of points. These include maps of the base manifolds, along with maps between the total spaces of all these bundles. More, this tower of maps must commute with all those bundle maps coming from algebra maps between the $\Lambda_q$. (In particular, since $\Lambda_0 = \mathbb{R}$, the ordinary point, is one of these, they have to commute with the projection to the base manifold $M(\Lambda_0)$.) These conditions may be quite restrictive, but they leave us with, at least, a quite concrete image of what maps of supermanifolds look like.
Super-Poincaré Group
One of the main settings where super-geometry appears is in so-called "supersymmetric" field theories, which is a concept that makes sense when fields live on supermanifolds. Supersymmetry, and symmetries associated to super-Lie groups, is exactly the kind of thing that John has worked on. A super-Lie group, of course, is a supermanifold that has the structure of a group (i.e. it's a Lie group in the topos of presheaves over the site of super-points – so the discussion above means it can be thought of as a big tower of Lie groups, all bundles over an ordinary Lie group of ordinary points).
In fact, John has mostly worked with super-Lie algebras (and the connection between these and division algebras, though that's another story). These are $\mathbb{Z}_2$-graded algebras with a Lie bracket whose commutation properties are the graded version of those for an ordinary Lie algebra. But part of the value of the framework above is that we can simply borrow results from Lie theory for manifolds, import them into the new topos, and know at once that super-Lie algebras integrate up to super-Lie groups in just the same way that happens in the old topos (of sets).
Supersymmetry refers to a particular example, namely the "super-Poincaré group". Just as the Poincaré group is the symmetry group of Minkowski space, a 4-manifold with a certain metric on it, the super-Poincaré group has the same relation to a certain supermanifold. (There are actually a few different versions, depending on the odd dimension.) The algebra is generated by infinitesimal translations and boosts, plus some "translations" in fermionic directions, which generate the odd part of the algebra.
Now, symmetry in a quantum theory means that this algebra (or, on integration, the corresponding group) acts on the Hilbert space of possible states of the theory: that is, the space of states is actually a representation of this algebra. In fact, to make sense of this, we need a super-Hilbert space (i.e. a graded one). The even generators of the algebra then produce grade-preserving self-maps of the Hilbert space, and the odd generators produce grade-reversing ones. (The fact that there are symmetries which flip the "bosonic" and "fermionic" parts of the total space is why supersymmetric theories have "superpartners" for each particle, with the opposite parity, since particles are labelled by irreducible representations of the Poincaré group and the gauge group.)
To date, so far as I know, there's no conclusive empirical evidence that real quantum field theories actually exhibit supersymmetry, such as detecting actual super-partners for known particles. Even if not, however, it still has some use as a way of developing toy models of quite complicated theories which are more tractable than one might expect, precisely because they have lots of symmetry. It's somewhat like how it's much easier to study computationally difficult theories like gravity by assuming, for instance, spherical symmetry as an extra assumption. In any case, from a mathematician's point of view, this sort of symmetry is just a particularly simple case of symmetries for theories which live on noncommutative backgrounds, which is quite an interesting topic in its own right. As usual, physics generates lots of math which remains both true and interesting whether or not it applies in the way it was originally suggested.
In any case, what the functor-of-points viewpoint suggests is that ordinary and super- symmetries are just two special cases of "symmetries of a field theory" in two different toposes. Understanding these and other examples from this point of view seems to give a different understanding of what "symmetry", one of the most fundamental yet slippery concepts in mathematics and science, actually means.
Lecture Series on Sheaves and Motives – Part I (Sheaves as Spaces)
Posted by Jeffrey Morton under category theory, geometry, sheaves, smooth spaces, Supergeometry, toposes
This semester, Susama Agarwala and I have been sharing a lecture series for graduate students. (A caveat: there are lecture notes there, by student request, but they're rough notes, and contain some mistakes, omissions, and represent a very selective view of the subject.) Being a "topics" course, it consists of a few different sections, loosely related, which revolve around the theme of categorical tools which are useful for geometry (and topology).
What this has amounted to is: I gave a half-semester worth of courses on toposes, sheaves, and the basics of derived categories. Susama is now giving the second half, which is about motives. This post will talk about the part of the course I gave. Though this was a whole series of lectures which introduced all these topics more or less carefully, I want to focus here on the part of the lecture which built up to a discussion of sheaves as spaces. Nothing here, or in the two posts to follow, is particularly new, but they do amount to a nice set of snapshots of some related ideas.
Coming up soon: John Huerta is currently visiting Hamburg, and on July 8, he gave a guest-lecture which uses some of this machinery to talk about supermanifolds, which will be the subject of the next post in this series. In a later post, I'll talk about Susama's lectures about motives and how this relates to the discussion here (loosely).
Grothendieck Toposes
The first half of our course was about various aspects of Grothendieck toposes. In the first lecture, I talked about "Elementary" (or Lawvere-Tierney) toposes. One way to look at these is to say that they are categories which have all the properties of the category of Sets which make it useful for doing most of ordinary mathematics. Thus, a topos in this sense is a category with a bunch of properties – there are various equivalent definitions, but for example, toposes have all finite limits (in particular, products), and all finite colimits.
More particularly, they have exponential objects. That is, if $A$ and $B$ are objects of the topos, then there is an object $B^A$, with an "evaluation map" $B^A \times A \to B$, which makes it possible to think of $B^A$ as the object of "morphisms from $A$ to $B$".
The other main thing a topos has is a "subobject classifier". Now, a subobject of $A$ is an equivalence class of monomorphisms into $A$ – think of sets, where this amounts to specifying the image, and the monomorphisms are the various inclusions which pick out the same subset as their image. A classifier for subobjects should be thought of as something like the two-element set $\{\mathrm{true}, \mathrm{false}\}$, whose elements we can call "true" and "false". Then every subset of $A$ corresponds to a characteristic function $A \to \{\mathrm{true}, \mathrm{false}\}$. In general, a subobject classifier is an object $\Omega$ together with a map $\mathrm{true} : 1 \to \Omega$ from the terminal object, such that every inclusion of a subobject is a pullback of $\mathrm{true}$ along a characteristic function.
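In the topos of sets this correspondence between subobjects and characteristic maps can be spelled out directly; a minimal sketch (the set and subset are chosen arbitrarily for illustration):

```python
# In Sets, the subobject classifier is Omega = {True, False}: subsets of A
# correspond exactly to characteristic functions A -> Omega, with the subset
# recovered as the pullback of "true" (i.e. the preimage of True).
A = {"x", "y", "z"}

def characteristic(subset):
    return {a: a in subset for a in A}        # the map chi_S : A -> {True, False}

def pullback_of_true(chi):
    return {a for a in A if chi[a]}           # pull "true" back along chi

S = {"x", "z"}
chi_S = characteristic(S)
assert pullback_of_true(chi_S) == S           # the two constructions are inverse
print(sorted(chi_S.items()))                  # [('x', True), ('y', False), ('z', True)]
```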
Now, elementary toposes were invented chronologically later than Grothendieck toposes, which are a special class of example. These are categories of sheaves on (Grothendieck) sites. A site is a category $\mathcal{S}$ together with a "topology" $J$, which is a rule which, for each object $U$, picks out $J(U)$, a set of collections of maps into $U$, called covering sieves for $U$. These collections have to satisfy certain conditions, but the idea can be understood in terms of the basic example: the site of open sets of a topological space $X$. Given such a space, the category is the one whose objects are the open sets $U \subseteq X$, and whose morphisms are all the inclusions. Then the topology says that each collection in $J(U)$ is an open cover of $U$ – that is, a bunch of inclusions of open sets which together cover all of $U$ in the usual sense.
(This is a little special to $\mathcal{O}(X)$, where every map is an inclusion – in a general site, the sieves need to be closed under composition with any other morphism (like an ideal in a ring). So for instance, in $Top$, the category of topological spaces, the usual choice of $J$ consists of all collections of maps which are jointly surjective.)
The point is that a presheaf on $\mathcal{C}$ is just a functor $F : \mathcal{C}^{op} \to Sets$. That is, it's a way of assigning a set to each object $U$. So, for instance, for either of the cases we just mentioned, one has the presheaf which assigns to each open set $U$ the set of all bounded functions on $U$, and to every inclusion the restriction map. Or, again, one has the presheaf which assigns to each open set the set of all continuous functions.
These two examples illustrate the condition which distinguishes those presheaves which are sheaves – namely, those which satisfy some "gluing" conditions. Thus, suppose we're given an open cover $\{ U_i \to U \}$, and a choice of one element $x_i$ from each $F(U_i)$, which form a "matching family" in the sense that they agree when restricted to any overlaps. Then the sheaf condition says that there's a unique "amalgamation" of this family – that is, one element $x \in F(U)$ which restricts to all the $x_i$ under the maps $F(U) \to F(U_i)$.
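In symbols, one standard way to package this (assuming the site has the relevant pullbacks, which for open covers of a space are just the intersections $U_i \cap U_j$): a presheaf $F$ is a sheaf for the cover $\{ U_i \to U \}$ exactly when

$$ F(U) \longrightarrow \prod_i F(U_i) \rightrightarrows \prod_{i,j} F(U_i \times_U U_j) $$

is an equalizer diagram. The two parallel maps are the two restrictions to the overlaps, and the matching families are precisely the elements of the middle product on which they agree.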
Sheaves as Generalized Spaces
There are various ways of looking at sheaves, but for the purposes of the course on categorical methods in geometry, I decided to emphasize the point of view that they are a sort of generalized space.
The intuition here is that all the objects and morphisms in a site $\mathcal{C}$ have corresponding objects and morphisms in the category of presheaves on it. Namely, the objects appear as the representable presheaves, $y(X) = \mathrm{Hom}(-, X)$, and the morphisms show up as the induced natural transformations between these functors. This map $y$ is called the Yoneda embedding. If the site is at all well-behaved (as it is in all the examples we're interested in here), these presheaves will always be sheaves: the image of $y$ lands in $Sh(\mathcal{C})$.
In this case, the Yoneda embedding embeds $\mathcal{C}$ as a sub-category of $Sh(\mathcal{C})$. What's more, it's a full subcategory: all the natural transformations between representable presheaves come from the morphisms of $\mathcal{C}$-objects in a unique way. So $Sh(\mathcal{C})$ is, in this sense, a generalization of $\mathcal{C}$ itself.
More precisely, it's the Yoneda lemma which makes sense of all this. The idea is to start with the way ordinary $\mathcal{C}$-objects (from now on, just call them "spaces") become presheaves: they become functors which assign to each $U$ the set of all maps from $U$ into the given space. So the idea is to turn this around, and declare that even non-representable sheaves should have the same interpretation. The Yoneda Lemma makes this a sensible interpretation: it says that, for any presheaf $F$ and any object $X$, the set of maps from the representable presheaf on $X$ to $F$ is naturally isomorphic to $F(X)$: that is, $F(X)$ literally is the collection of morphisms from $X$ (or rather, its image under the Yoneda embedding) to a "generalized space" $F$. (See also Tom Leinster's nice discussion of the Yoneda Lemma if this isn't familiar.) We describe $X$ as a "probe" object: one probes the space $F$ by mapping $X$ into it in various ways. Knowing the results for all probes $X$ tells you all about the "space" $F$. (Thus, for instance, one can get all the information about the homotopy type of a space if you know all the maps into it from spheres of all dimensions up to homotopy. So spheres are acting as "probes" to reveal things about the space.)
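Stated as a formula (with $y : \mathcal{C} \to PSh(\mathcal{C})$ the Yoneda embedding), the lemma gives a natural bijection

$$ \mathrm{Nat}(y(X), F) \;\cong\; F(X) $$

for every object $X$ and presheaf $F$; taking $F = y(Y)$ to be representable recovers the statement that $y$ is full and faithful.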
Furthermore, since $Sh(\mathcal{C})$ is a topos, it is often a nicer category than the one you start with. It has limits and colimits, for instance, which the original category might not have. For example, if the kind of spaces you want to generalize are manifolds, one doesn't have all colimits, such as the space you get by gluing together two lines at a point. The sheaf category does. Likewise, the sheaf category has exponentials, and manifolds don't (at least not without the more involved definitions needed to allow infinite-dimensional manifolds).
These last remarks about manifolds suggest the motivation for the first example…
Diffeological Spaces
The lecture I gave about sheaves as spaces used this paper by John Baez and Alex Hoffnung about "smooth spaces" (they treat Souriau's diffeological spaces, and the different but related Chen spaces, in the same framework) to illustrate the point. In that case, the objects of the sites are the open (or, for Chen spaces, convex) subsets of $\mathbb{R}^n$, for all choices of $n$, the maps are the smooth maps in the usual sense (i.e. the sense to be generalized), and the covers are jointly surjective collections of maps.
Now, that example is a somewhat special situation: they talk about concrete sheaves, on concrete sites, and the resulting categories are only quasitoposes – a slightly weaker condition than being a topos, but one still gets a useful collection of spaces, which among other things include all manifolds. The "concreteness" condition is that the site has a terminal object to play the role of "the point". Being a concrete sheaf then means that all the "generalized spaces" have an underlying set of points (namely, the set of maps from the point object), and that all morphisms between the spaces are completely determined by what they do to the underlying set of points. This means that the "spaces" really are just sets with some structure.
Now, if the site happens to be $\mathcal{O}(X)$ for a topological space $X$, then we have a slightly different intuition: the "generalized" spaces are something like generalized bundles over $X$, and the "probes" are now sections of such a bundle. A simple example would be an actual sheaf of functions: these are sections of a trivial bundle, since, say, $\mathbb{R}$-valued functions are sections of the bundle $X \times \mathbb{R} \to X$. Given a nontrivial bundle $\pi : E \to X$, there is a sheaf of sections – on each open set $U$, one gets the set of all local one-sided inverses of $\pi$ defined on $U$. For a generic sheaf, we can imagine a sort of "generalized bundle" over $X$.
Another example of the fact that sheaves can be seen as spaces is the category of schemes: these are often described as topological spaces which are themselves equipped with a sheaf of rings. "Scheme" is to algebraic geometry what "manifold" is to differential geometry: a kind of space which looks locally like something classical and familiar. Schemes, in some neighborhood of each point, must resemble varieties – i.e. the locus of zeroes of some algebraic function on $\mathbb{k}^n$. For varieties, the rings attached to neighborhoods are rings of algebraic functions on this locus, which will be a quotient of the ring of polynomials.
But another way to think of schemes is as concrete sheaves on a site whose objects are varieties and whose morphisms are algebraic maps. This is dual to the other point of view, just as thinking of diffeological spaces as sheaves is dual to a viewpoint in which they're seen as topological spaces equipped with a notion of "smooth function".
(Some general discussion of this in a talk by Victor Piercey)
These two viewpoints (defining the structure of a space by a class of maps into it, or by a class of maps out of it) in principle give different definitions. To move between them, you really need everything to be concrete: the space has an underlying set, the set of probes is a collection of real set-functions. Likewise, for something like a scheme, you'd need the ring for any open set to be a ring of actual set-functions. In this case, one can move between the two descriptions of the space as long as there is a pre-existing concept of the right kind of function on the "probe" spaces. Given a smooth space, say, one can define a sheaf of smooth functions on each open set by taking those whose composites with every probe are smooth. Conversely, given something like a scheme, where the structure sheaf is of function rings on each open subspace (i.e. the sheaf is representable), one can define the probes from varieties to be those which give algebraic functions when composed with every function in these rings. Neither of these will work in general: the two approaches define different categories of spaces (in the smooth context, see Andrew Stacey's comparison of various categories of smooth spaces, defined either by specifying the smooth maps in, or out, or both). But for very concrete situations, they fit together neatly.
The concrete case is therefore nice for getting an intuition for what it means to think of sheaves as spaces. For sheaves which aren't concrete, morphisms aren't determined by what they do to the underlying points, i.e. the forgetful "underlying set" functor isn't faithful. Here, we might think of a "generalized space" which looks like two copies of the same topological space: the sheaf gives two different morphisms for each map of underlying sets. We could think of such a generalized space as built from sets equipped with extra "stuff" (say, a set consisting of pairs – so it consists of a "blue" copy of X and a "green" copy of X – but the underlying set functor ignores the colouring).
Still, useful as they may be to get a first handle on this concept of sheaf as generalized space, one shouldn't rely on these intuitions too much: if the site doesn't even have a "point" object, there is no underlying set functor at all. Eventually, one simply has to get used to the idea of defining a space by the information revealed by probes.
In the next post, I'll talk more about this in the context of John Huerta's guest lecture, applying this idea to the category of supermanifolds, which can be seen as manifolds built internal to the topos of (pre)sheaves on a site whose objects are called "super-points".
"Observer Space": Cartan Geometry and Lifting General Relativity
Posted by Jeffrey Morton under gauge theory, geometry, physics, quantization, talks
This entry is a by-special-request blog, which Derek Wise invited me to write for the blog associated with the International Loop Quantum Gravity Seminar, and it will appear over there as well. The ILQGS is a long-running regular seminar which runs as a teleconference, with people joining in from various countries, on various topics which are more or less closely related to Loop Quantum Gravity and the interests of people who work on it. The custom is that when someone gives a talk, someone else writes up a description of the talk for the ILQGS blog, and Derek invited me to write up a description of his talk. The audio file of the talk itself is available in .aiff and .wav formats, and the slides are here.
The talk that Derek gave was based on a project of his and Steffen Gielen's, which has taken written form in a few papers (two shorter ones, "Spontaneously broken Lorentz symmetry for Hamiltonian gravity", "Linking Covariant and Canonical General Relativity via Local Observers", and a new, longer one called "Lifting General Relativity to Observer Space").
The key idea behind this project is the notion of "observer space", which is exactly what it sounds like: a space of all observers in a given universe. This is easiest to picture when one has a spacetime – a manifold $M$ with a Lorentzian metric $g$ – to begin with. Then an observer can be specified by choosing a particular point $x$ in spacetime, as well as a unit future-directed timelike vector $u \in T_x M$. This vector is a tangent to the observer's worldline at $x$. The observer space is therefore a bundle over $M$, the "future unit tangent bundle". However, using the notion of a "Cartan geometry", one can give a general definition of observer space which makes sense even when there is no underlying spacetime $M$.
The result is a surprising, relatively new physical intuition: "spacetime" is a local and observer-dependent notion, which in some special cases can be extended so that all observers see the same spacetime. This is somewhat related to the relativity of locality, which I've blogged about previously. Geometrically, it is similar to the fact that a slicing of spacetime into space and time is not unique, and not respected by the full symmetries of the theory of Relativity, even for flat spacetime (much less for the case of General Relativity). Similarly, we will see a notion of "observer space", which can sometimes be turned into a bundle over an objective spacetime $M$, but not in all cases.
So, how is this described mathematically? In particular, what did I mean up there by saying that spacetime becomes observer-dependent?
Cartan Geometry
The answer uses Cartan geometry, which is a framework for differential geometry that is slightly broader than what is commonly used in physics. Roughly, one can say "Cartan geometry is to Klein geometry as Riemannian geometry is to Euclidean geometry". The more familiar direction of generalization here is the fact that, like Riemannian geometry, Cartan is concerned with manifolds which have local models in terms of simple, "flat" geometries, but which have curvature, and fail to be homogeneous. First let's remember how Klein geometry works.
Klein's Erlangen Program, carried out in the late 19th century, systematically brought abstract algebra, and specifically the theory of Lie groups, into geometry, by placing the idea of symmetry in the leading role. It describes "homogeneous spaces", which are geometries in which every point is indistinguishable from every other point. This is expressed by the existence of a transitive action of some Lie group $G$ of all symmetries on an underlying space. Any given point $x$ will be fixed by some symmetries, and not others, so one also has a subgroup $H_x \subseteq G$. This is the "stabilizer subgroup", consisting of all symmetries which fix $x$. That the space is homogeneous means that for any two points $x$ and $y$, the subgroups $H_x$ and $H_y$ are conjugate (by a symmetry taking $x$ to $y$). Then the homogeneous space, or Klein geometry, associated to $(G, H)$ is, up to isomorphism, just the quotient space $G/H$ of the obvious action of $H$ on $G$.
The advantage of this program is that it has a great many examples, but the most relevant ones for now are:
$n$-dimensional Euclidean space. The Euclidean group $ISO(n) = O(n) \ltimes \mathbb{R}^n$ is precisely the group of transformations that leave the data of Euclidean geometry, lengths and angles, invariant. It acts transitively on $\mathbb{R}^n$. Any point will be fixed by the group of rotations centred at that point, which is a subgroup of $ISO(n)$ isomorphic to $O(n)$. Klein's insight is to reverse this: we may define Euclidean space by $\mathbb{R}^n \cong ISO(n)/O(n)$.
$n$-dimensional Minkowski space. Similarly, we can define this space to be $\mathbb{R}^{n-1,1} \cong ISO(n-1,1)/SO(n-1,1)$. The Euclidean group has been replaced by the Poincaré group, and rotations by the Lorentz group (of rotations and boosts), but otherwise the situation is essentially the same.
de Sitter space. As a Klein geometry, this is the quotient $SO(4,1)/SO(3,1)$. That is, the stabilizer of any point is the Lorentz group – so things look locally rather similar to Minkowski space around any given point. But the global symmetries of de Sitter space are different. Even more, it looks like Minkowski space locally in the sense that the quotients of the Lie algebras, $\mathfrak{so}(4,1)/\mathfrak{so}(3,1)$ and $\mathfrak{iso}(3,1)/\mathfrak{so}(3,1)$, are identical, seen as representations of $\mathfrak{so}(3,1)$. It's natural to identify them with the tangent space at a point. de Sitter space as a whole is easiest to visualize as a 4D hyperboloid in $\mathbb{R}^{4,1}$. This is supposed to be seen as a local model of spacetime in a theory in which there is a cosmological constant that gives empty space a constant positive curvature.
anti-de Sitter space. This is similar, but now the quotient is $SO(3,2)/SO(3,1)$ – in fact, this whole theory goes through for any of the last three examples: Minkowski; de Sitter; and anti-de Sitter, each of which acts as a "local model" for spacetime in General Relativity with the cosmological constant, respectively: zero; positive; and negative.
Now, what does it mean to say that a Cartan geometry has a local model? Well, just as a Lorentzian or Riemannian manifold is "locally modelled" by Minkowski or Euclidean space, a Cartan geometry is locally modelled by some Klein geometry $G/H$. This is best described in terms of a connection on a principal $H$-bundle, and the associated $G/H$-bundle, over some manifold $M$. The crucial bundle in a Riemannian or Lorentzian geometry is the frame bundle: the fibre over each point consists of all the ways to isometrically embed a standard Euclidean or Minkowski space into the tangent space. A connection on this bundle specifies how this embedding should transform as one moves along a path. It's determined by a 1-form on $M$, valued in the Lie algebra of $G$.
Given a parametrized path, one can apply this form to the tangent vector at each point, and get a Lie algebra-valued answer. Integrating along the path, we get a path in the Lie group $G$ (which is independent of the parametrization). This is called a "development" of the path, and by applying the $G$-values to the model space $G/H$, we see that the connection tells us how to move through a copy of the model space as we move along the path. The image this suggests is of "rolling without slipping" – think of the case where the model space is a sphere. The connection describes how the model space "rolls" over the surface of the manifold $M$. Curvature of the connection measures the failure to commute of the processes of rolling in two different directions. A connection with zero curvature describes a space which (locally at least) looks exactly like the model space: picture a sphere rolling against its mirror image. Transporting the sphere-shaped fibre around any closed curve always brings it back to its starting position. Now, curvature is defined in terms of transports of these Klein-geometry fibres. Since curvature is measured by the development of curves, we can think of each homogeneous space as a flat Cartan geometry with itself as a local model.
This idea, that the curvature of a manifold depends on the model geometry being used to measure it, shows up in the way we apply this geometry to physics.
Gravity and Cartan Geometry
MacDowell-Mansouri gravity can be understood as a theory in which General Relativity is modelled by a Cartan geometry. Of course, a standard way of presenting GR is in terms of the geometry of a Lorentzian manifold. In the Palatini formalism, the basic fields are a connection $\omega$ and a vierbein (coframe field) called $e$, with dynamics encoded in the Palatini action, which is the integral over $M$ of $\epsilon_{IJKL}\, e^I \wedge e^J \wedge F^{KL}$, where $F$ is the curvature 2-form for $\omega$.
This can be derived from a Cartan geometry whose model geometry is de Sitter space $SO(4,1)/SO(3,1)$. Then MacDowell-Mansouri gravity gets $\omega$ and $e$ by splitting the Lie algebra as $\mathfrak{so}(4,1) \cong \mathfrak{so}(3,1) \oplus \mathbb{R}^{3,1}$. This "breaks the full symmetry" at each point. Then one has a fairly natural action for the $\mathfrak{so}(4,1)$-connection.
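Up to normalization and sign conventions, the action takes the general MacDowell-Mansouri form (a sketch following that literature rather than the talk's own slides):

$$ S_{MM}[A] \;=\; \int_M \mathrm{tr}\big( \hat{F} \wedge \star \hat{F} \big), $$

with $\star$ an internal Hodge star operator acting on the Lorentz indices.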
Here, $\hat{F}$ is the $\mathfrak{so}(3,1)$ part of the curvature of the big connection $A$. The splitting of the connection means that $\hat{F}$ can be written in terms of the curvature of $\omega$ together with a term quadratic in $e$, and the action above is rewritten, up to a normalization, as the Palatini action for General Relativity (plus a topological term, which has no effect on the equations of motion we get from the action). So General Relativity can be written as the theory of a Cartan geometry modelled on de Sitter space.
The cosmological constant in GR shows up because a "flat" connection for a Cartan geometry based on de Sitter space will look (if measured by Minkowski space) as if it has constant curvature which is exactly that of the model Klein geometry. The way to think of this is to take the fibre bundle of homogeneous model spaces as a replacement for the tangent bundle to the manifold. The fibre at each point describes the local appearance of spacetime. If empty spacetime is flat, this local model is Minkowski space, and one can really speak of tangent "vectors". In the de Sitter and anti-de Sitter cases, the tangent homogeneous space is not linear: the fibres are not vector spaces, precisely because the large group of symmetries doesn't contain a group of translations, but they are Klein geometries constructed in just the same way as Minkowski space. Thus, the local description of the connection in terms of Lie-algebra-valued forms can be treated in the same way, regardless of which Klein geometry occurs in the fibres. In particular, General Relativity, formulated in terms of Cartan geometry, always says that, in the absence of matter, the geometry of space is flat, and the cosmological constant is included naturally by the choice of which Klein geometry is the local model of spacetime.
Observer Space
The idea in defining an observer space is to combine two symmetry reductions into one. The reduction from $SO(4,1)$ to $SO(3,1)$ gives de Sitter space as a model Klein geometry, which reflects the "symmetry breaking" that happens when choosing one particular point in spacetime, or event. Then, the reduction of $SO(3,1)$ to $SO(3)$ similarly reflects the symmetry breaking that occurs when one chooses a specific time direction (a future-directed unit timelike vector). These are the tangent vectors to the worldline of an observer at the chosen point, so the model Klein geometry $SO(3,1)/SO(3)$ is the space of such possible observers at a given event. The stabilizer subgroup for a point in this space consists of just the rotations of space around the corresponding observer – the boosts in $SO(3,1)$ translate between observers. So locally, choosing an observer amounts to a splitting of the model spacetime at the point into a product of space and time. If we combine both reductions at once, we get the 7-dimensional Klein geometry $SO(4,1)/SO(3)$. This is just the future unit tangent bundle of de Sitter space, which we think of as a homogeneous model for the "space of observers".
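To keep track of the groups (using the de Sitter case from above), the nested reductions and the resulting Klein geometry are:

$$ SO(4,1) \;\supset\; SO(3,1) \;\supset\; SO(3), \qquad \text{observer space} \;=\; SO(4,1)/SO(3), $$

which has dimension $10 - 3 = 7$: four dimensions for the choice of event in de Sitter space, plus three for the choice of unit future-directed timelike direction at that event.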
A general observer space $O$, however, is just a Cartan geometry modelled on $SO(4,1)/SO(3)$. This is a 7-dimensional manifold, equipped with the structure of a Cartan geometry. One class of examples are exactly the future unit tangent bundles to 4-dimensional Lorentzian spacetimes. In these cases, observer space is naturally a contact manifold: that is, it's an odd-dimensional manifold equipped with a 1-form $\alpha$, the contact form, which is such that the top-dimensional form $\alpha \wedge (d\alpha)^3$ is nowhere zero. This is the odd-dimensional analog of a symplectic manifold. Contact manifolds are, intuitively, configuration spaces of systems which involve "rolling without slipping" – for instance, a sphere rolling on a plane. In this case, it's better to think of the local space of observers which "rolls without slipping" on a spacetime manifold $M$.
Now, Minkowski space has a slicing into space and time – in fact, one for each observer, who defines the time direction, but the time coordinate does not transform in any meaningful way under the symmetries of the theory, and different observers will choose different ones. In just the same way, the homogeneous model of observer space can naturally be written as a bundle $SO(4,1)/SO(3) \to SO(4,1)/SO(3,1)$ over de Sitter space. But a general observer space may or may not be a bundle over an ordinary spacetime manifold $M$. Every such Cartan geometry on a spacetime gives rise to an observer space as the bundle of future-directed unit timelike vectors, but not every observer space is of this form, in any natural way. Indeed, without a further condition, we can't even reconstruct observer space as such a bundle in an open neighborhood of a given observer.
This may be intuitively surprising: it gives a perfectly concrete geometric model in which "spacetime" is relative and observer-dependent, and perhaps only locally meaningful, in just the same way as the distinction between "space" and "time" in General Relativity. It may be impossible, that is, to determine objectively whether two observers are located at the same base event or not. This is a kind of "Relativity of Locality" which is geometrically much like the by-now more familiar Relativity of Simultaneity. Each observer will reach certain conclusions as to which observers share the same base event, but different observers may not agree. The coincident observers according to a given observer are those reached by a good class of geodesics in $O$ moving only in directions that observer sees as boosts.
When one can reconstruct the spacetime $M$, two observers will agree whether or not they are coincident. The extra condition which makes this possible is an integrability constraint on the action of the Lie algebra (in our main example, $\mathfrak{so}(4,1)$) on the observer space $O$. In this case, the fibres of the bundle $O \to M$ are the orbits of this action, and we have the familiar world of Relativity, where simultaneity may be relative, but locality is absolute.
Lifting Gravity to Observer Space
Apart from describing this model of relative spacetime, another motivation for describing observer space is that one can formulate canonical (Hamiltonian) GR locally near each point in such an observer space. The goal is to make a link between covariant and canonical quantization of gravity. Covariant quantization treats the geometry of spacetime all at once, by means of a Lagrangian action functional. This is mathematically appealing, since it respects the symmetry of General Relativity, namely its diffeomorphism-invariance. On the other hand, it is remote from the canonical (Hamiltonian) approach to quantization of physical systems, in which the concept of time is fundamental. In the canonical approach, one gets a Hilbert space by quantizing the space of states of a system at a given point in time, and the Hamiltonian for the theory describes its evolution. This is problematic for diffeomorphism-, or even Lorentz-invariance, since coordinate time depends on a choice of observer. The point of observer space is that we consider all these choices at once. Describing GR in observer space is both covariant, and based on (local) choices of time direction.
This is easiest to describe in the case of a bundle $O \to M$. Then a "field of observers" is a section of the bundle: a choice, at each base event in $M$, of an observer based at that event. A field of observers may or may not correspond to a particular decomposition of spacetime into space evolving in time, but locally, at each point in $M$, it always looks like one. The resulting theory describes the dynamics of space-geometry over time, as seen locally by a given observer. In this case, a Cartan connection on observer space is described by an $\mathfrak{so}(4,1)$-valued form. This decomposes into four Lie-algebra valued forms, interpreted as infinitesimal transformations of the model observer by: (1) spatial rotations; (2) boosts; (3) spatial translations; (4) time translation. The four-fold division is based on two distinctions: first, between the base event at which the observer lives, and the choice of observer (i.e. the reduction of $SO(4,1)$ to $SO(3,1)$, which symmetry breaking entails choosing a point); and second, between space and time (i.e. the reduction of $SO(3,1)$ to $SO(3)$, which symmetry breaking entails choosing a time direction).
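As a sketch (the splitting is as a direct sum of vector spaces, not of Lie subalgebras, and the grouping and labels below are my own notation rather than the talk's):

$$ \mathfrak{so}(4,1) \;\cong\; \underbrace{\mathfrak{so}(3)}_{\text{rotations}} \;\oplus\; \underbrace{\mathfrak{b}}_{\text{boosts}} \;\oplus\; \underbrace{\mathbb{R}^3}_{\text{spatial translations}} \;\oplus\; \underbrace{\mathbb{R}}_{\text{time translation}}, $$

so an $\mathfrak{so}(4,1)$-valued connection form on observer space breaks into four pieces carrying exactly these interpretations.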
This splitting, along the same lines as the one in MacDowell-Mansouri gravity described above, suggests that one could lift GR to a theory on an observer space $O$. This amounts to describing fields on $O$ and an action functional, so that the splitting of the fields gives back the usual fields of GR on spacetime, and the action gives back the usual action. This part of the project is still under development, but this lifting has been described. In the case when there is no "objective" spacetime, the result includes some surprising new fields which it's not clear how to deal with, but when there is an objective spacetime, the resulting theory looks just like GR.
Higher Structures in China
Posted by Jeffrey Morton under algebra, categorification, cohomology, conferences, double categories, geometry, groupoids, quantization
Since the last post, I've been busily attending some conferences, as well as moving to my new job at the University of Hamburg, in the Graduiertenkolleg 1670, "Mathematics Inspired by String Theory and Quantum Field Theory". The week before I started, I was already here in Hamburg, at the conference they were organizing "New Perspectives in Topological Quantum Field Theory". But since I last posted, I was also at the 20th Oporto Meeting on Geometry, Topology, and Physics, as well as the third Higher Structures in China workshop, at Jilin University in Changchun. Right now, I'd like to say a few things about some of the highlights of that workshop.
Higher Structures in China III
So last year I had a bunch of discussions with Chenchang Zhu and Weiwei Pan, who at the time were both in Göttingen, about my work with Jamie Vicary, which I wrote about last time when the paper was posted to the arXiv. In that, we showed how the Baez-Dolan groupoidification of the Heisenberg algebra can be seen as a representation of Khovanov's categorification. Chenchang and Weiwei and I had been talking about how these ideas might extend to other examples, in particular to give nice groupoidifications of categorified Lie algebras and quantum groups.
That is still under development, but I was invited to give a couple of talks on the subject at the workshop. It was a long trip: from Lisbon, the farthest-west of the main cities of (continental) Eurasia, all the way to one of the furthest east. (Not quite the furthest, but Changchun is in the northeast of China, just a few hours north of Korea, and it took just about exactly 24 hours including stopovers to get there.) It was a long way to go for a three-day workshop, but there were also three days of a big excursion to Changbai Mountain, just on the border with North Korea, for hiking and general touring around. So that was a sort of holiday, with 11 other mathematicians. Here is me with Dany Majard, in a national park along the way to the mountains:
Here's me with Alex Hoffnung, on Changbai Mountain (in the background is China):
And finally, here's me a little to the left of the previous picture, where you can see into the volcanic crater. The lake at the bottom is cut out of the picture, but you can see the crater rim, of which this particular part is in North Korea, as seen from China:
Well, that was fun!
Anyway, the format of the workshop involved some talks from foreigners and some from locals, with a fairly big local audience including a good many graduate students from Jilin University. So they got a chance to see some new work being done elsewhere – mostly in categorification of one kind or another. We got a chance to see a little of what's being done in China, although not as much as we might have. I gather that not much is being done yet that fits the theme of the workshop, which was part of the reason to organize the workshop, and especially for having a session aimed specially at the graduate students.
Categorified Algebra
This is a sort of broad term, but certainly would include my own talk. The essential point is to show how the groupoidification of the Heisenberg algebra is a representation of Khovanov's categorification of the same algebra, in a particular 2-category. The emphasis here is on the fact that it's a representation in a 2-category whose objects are groupoids, but whose morphisms aren't just functors, but spans of functors – that is, composites of functors and co-functors. This is a pretty conservative weakening of "representations on categories" – but it lets one build really simple combinatorial examples. I've discussed this general subject in recent posts, so I won't elaborate too much. The lecture notes are here, if you like, though – they have more detail than my previous post, but are less technical than the paper with Jamie Vicary.
Aaron Lauda gave a nice introduction to the program of categorifying quantum groups, mainly through the example of the special case of quantum $\mathfrak{sl}_2$, somewhat along the same lines as in his introductory paper on the subject. The story which gives the motivation is nice: one has knot invariants such as the Jones polynomial, based on representations of groups and quantum groups. The Jones polynomial can be categorified to give Khovanov homology (which assigns a complex to a knot, whose graded Euler characteristic is the Jones polynomial) – which also assigns maps of complexes to cobordisms of knots. One then wants to categorify the representation theory behind it – to describe actions of, for instance, quantum $\mathfrak{sl}_2$ on categories. This starting point is nice, because it can work by just mimicking the construction of $\mathfrak{sl}_2$ representations in terms of weight spaces: one gets categories which correspond to the "weight spaces" (usually just vector spaces), and the $E$ and $F$ operators give functors between them, and so forth.
Finding examples of categories and functors with this structure, and satisfying the right relations, gives "categorified representations" of the algebra – the monoidal categories of diagrams which are the "categorifications of the algebra" then are seen as the abstraction of exactly which relations these are supposed to satisfy. One such example involves flag varieties. A flag, as one might eventually guess from the name, is a nested collection of subspaces in some $n$-dimensional space. A simple example is the Grassmannian of all 1-dimensional subspaces of $\mathbb{C}^n$ (i.e. the projective space $\mathbb{CP}^{n-1}$), which is of course an algebraic variety. Likewise, the Grassmannian of all $k$-dimensional subspaces of $\mathbb{C}^n$ is a variety. The flag variety of pairs consisting of a $k$-dimensional subspace of $\mathbb{C}^n$ inside a $(k+1)$-dimensional subspace (the case of a line inside a plane calls to mind the reason for the name: a plane intersecting a given line resembles a flag stuck to a flagpole) is again a variety. One can go all the way up to the variety of "complete flags" (where the $i$-th subspace is $i$-dimensional), any point of which picks out a subspace of each dimension, each inside the next.
The way this relates to representations is by way of geometric representation theory. One can see those flag varieties of pairs as relating the Grassmannians: there are two projections from the flag variety, one to each Grassmannian, which act by just ignoring one or the other of the two subspaces of a flag. This pair of maps, by way of pulling-back and pushing-forward functions, gives maps between the cohomology rings of these spaces. So one gets a sequence of cohomology rings of the Grassmannians, one for each $k$, and maps between the adjacent ones. This becomes a representation of the Lie algebra $\mathfrak{sl}_2$. Categorifying this, one replaces the cohomology rings with derived categories of sheaves on the flag varieties – then the same sort of "pull-push" operation through (derived categories of sheaves on) the flag varieties defines functors between those categories. So one gets a categorified representation.
Heather Russell's talk, based on this paper with Aaron Lauda, built on the idea that categorified algebras were motivated by Khovanov homology. The point is that there are really two different kinds of Khovanov homology – the usual kind, and an Odd Khovanov Homology, which is mainly different in that the role played in Khovanov homology by a symmetric algebra is instead played by an exterior (antisymmetric) algebra. The two look the same over a field of characteristic 2, but are otherwise different. The idea is then that there should be "odd" versions of various structures that show up in the categorifications of $\mathfrak{sl}_2$ (and other algebras) mentioned above.
One example is the fact that, in the "even" form of those categorifications, there is a natural action of the Nil Hecke algebra on composites of the generators. This is an algebra which can be seen to act on the space of polynomials in $n$ commuting variables $x_1, \dots, x_n$, generated by the multiplication operators $x_i$, and the "divided difference operators" based on the swapping of two adjacent variables. The Hecke algebra is defined in terms of "swap" generators, which satisfy some $q$-deformed variation of the relations that define the symmetric group (and hence its group algebra). The Nil Hecke algebra is so called since the "swap" (i.e. the divided difference) is nilpotent: the square of the swap is zero. The way this acts on the objects of the diagrammatic category is reflected by morphisms drawn as crossings of strands, which are then formally forced to satisfy the relations of the Nil Hecke algebra.
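Concretely (this is the standard formula for the divided difference operators, rather than anything specific to the talk), the operator associated to swapping $x_i$ and $x_{i+1}$ acts on a polynomial $f$ by

$$ \partial_i f \;=\; \frac{f - s_i f}{x_i - x_{i+1}}, $$

where $s_i f$ is $f$ with $x_i$ and $x_{i+1}$ interchanged. Since the numerator vanishes when $x_i = x_{i+1}$, this really is a polynomial, and since $\partial_i f$ is symmetric in $x_i$ and $x_{i+1}$, one checks directly that $\partial_i^2 = 0$ – the "nil" in Nil Hecke.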
The ODD Nil Hecke algebra, on the other hand, is an analogue of this, but the variables are anti-commuting, and one has different relations satisfied by the generators (they differ by a sign, because of the anti-commutation). This sort of "oddification" is then supposed to happen all over. The main point of the talk was to describe the "odd" version of the categorified representation defined using flag varieties. Then the odd Nil Hecke algebra acts on that, analogously to the even case above.
Marco Mackaay gave a couple of talks about the $\mathfrak{sl}_3$ web algebra, describing the results of this paper with Weiwei Pan and Daniel Tubbenhauer. This is the analog of the above, for $\mathfrak{sl}_3$, describing a diagram calculus which accounts for representations of the quantum group. The underlying webs were introduced by Greg Kuperberg – diagrams which can now include some trivalent vertices, along with rules imposing relations on these. When categorifying, one gets a calculus of "foams" between such diagrams. Since this is obviously fairly diagram-heavy, I won't try here to reproduce what's in the paper – but an important part of it is the correspondence between webs and Young Tableaux, since these are labels in the representation theory of the quantum group – so there is some interesting combinatorics here as well.
Algebraic Structures
Some of the talks were about structures in algebra in a more conventional sense.
Jiang-Hua Lu: On a class of iterated Poisson polynomial algebras. The starting point of this talk was to look at Poisson brackets on certain spaces and see that they can be found in terms of "semiclassical limits" of some associative product. That is, the associative product of two elements gives a power series in some parameter (which one should think of as something like Planck's constant in a quantum setting). The "classical" limit is the constant term of the power series, and the "semiclassical" limit is the first-order term. This gives a Poisson bracket (or rather, the commutator of the associative product does). In the examples, the spaces where these things are defined are all spaces of polynomials (which makes a lot of explicit computer-driven calculations more convenient). The talk gives a way of constructing a big class of Poisson brackets (having some nice properties: they are "iterated Poisson brackets") coming from quantum groups as semiclassical limits. The construction uses words in the generating reflections for the Weyl group of a Lie group $G$.
Li Guo: Successors and Duplicators of Operads – first described a whole range of different algebra-like structures which have come up in various settings, from physics and dynamical systems, through quantum field theory, to Hopf algebras, combinatorics, and so on. Each of them is some sort of set (or vector space, etc.) with some number of operations satisfying some conditions – in some cases, lots of operations, and even more conditions. In the slides you can find several examples – pre-Lie and post-Lie algebras, dendriform algebras, quadri- and octo-algebras, etc. etc. Taken as a big pile of definitions of complicated structures, this seems like a terrible mess. The point of the talk is to point out that it's less messy than it appears: first, each definition of an algebra-like structure comes from an operad, which is a formal way of summing up a collection of operations with various "arities" (number of inputs), and relations that have to hold. The second point is that there are some operations, "successor" and "duplicator", which take one operad and give another, and that many of these complicated structures can be generated from simple structures by just these two operations. The "successor" operation for an operad introduces a new product related to old ones – for example, the way one can get a Lie bracket from an associative product by taking the commutator. The "duplicator" operation takes existing products and introduces two new products, whose sum is the previous one, and which satisfy various nice relations. Combining these two operations in various ways to various starting points yields up a plethora of apparently complicated structures.
Dany Majard gave a talk about algebraic structures which are related to double groupoids, namely double categories where all the morphisms are invertible. The first part just defined double categories: graphically, one has horizontal and vertical 1-morphisms, and square 2-morphisms, which compose in both directions. Then there are several special degenerate cases, in the same way that categories have as degenerate cases (a) sets, seen as categories with only identity morphisms, and (b) monoids, seen as one-object categories. Double categories have ordinary categories (and hence monoids and sets) as degenerate cases. Other degenerate cases are 2-categories (horizontal and vertical morphisms are the same thing), and therefore their own special cases, monoidal categories and symmetric monoids. There is also the special degenerate case of a double monoid (and the extra-special case of a double group). (The slides have nice pictures showing how they're all degenerate cases.) Dany then talked about some structure of double group(oid)s – and gave a list of properties for double groupoids (such as being "slim" – having at most one 2-cell per boundary configuration – as well as two others) which ensure that they're equivalent to the semidirect product of an abelian group with the "bicrossed product" of two groups $H$ and $K$ (each of which has to act on the other for this to make sense). He gave the example of the Poincaré double group, which breaks down as a triple bicrossed product by the Iwasawa decomposition:
($N$ is a certain group of matrices). So there's a unique double group which corresponds to it – it has squares labelled by the abelian group, and the horizontal and vertical morphisms by elements of the other two groups respectively. Dany finished by explaining that there are higher-dimensional analogs of all this – $n$-tuple categories can be defined recursively by internalization ("internal categories in $(n-1)$-tuple-Cat"). There are somewhat more sophisticated versions of the same kind of structure, finally leading up to a special class of $n$-tuple groups. The analogous theorem says that a special class of them is just the same as the semidirect product of an abelian group with an $n$-fold iterated bicrossed product of groups.
Also in this category, Alex Hoffnung talked about deformation of formal group laws (based on this paper with various collaborators). FGLs are structures with an algebraic operation which satisfies axioms similar to a group, but which can be expressed in terms of power series. (So, in particular, they have an underlying ring, for this to make sense.) In particular, the talk was about formal group algebras – essentially, parametrized deformations of group algebras – and in particular for Hecke algebras. Unfortunately, my notes on this talk are mangled, so I'll just refer to the paper.
Physics

I'm using the subject-header "physics" to refer to those talks which are most directly inspired by physical ideas, though in fact the talks themselves were mathematical in nature.
Fei Han gave a series of overview talks introducing "Equivariant Cohomology via Gauged Supersymmetric Field Theory", explaining the Stolz-Teichner program. There is more, using tools from differential geometry and cohomology to dig into these theories, but for now a summary will do. Essentially, the point is that one can look at "fields" as sections of various bundles on manifolds, and these fields are related to cohomology theories. For instance, the usual cohomology of a space $X$ is a quotient of the space of closed forms (so the cohomology, $H^n(X)$, is a quotient of the space of closed $n$-forms – the quotient being that forms differing by a coboundary are considered the same). There's a similar construction for $K$-theory, $K(X)$, which can be modelled as a quotient of the space of vector bundles over $X$. Fei Han mentioned topological modular forms, modelled by a quotient of the space of "Fredholm bundles" – bundles of Banach spaces with a Fredholm operator around.
The first two of these examples are known to be related to certain supersymmetric topological quantum field theories. Now, a TFT is a functor into some kind of vector spaces from a category of $(n-1)$-dimensional manifolds and $n$-dimensional cobordisms.
Intuitively, it gives a vector space of possible fields on the given space and a linear map on a given spacetime. A supersymmetric field theory is likewise a functor, but one changes the category of "spacetimes" to have both bosonic and fermionic dimension. A normal smooth manifold is a ringed space $(M, C^{\infty}_M)$, since it comes equipped with a sheaf of rings (each open set has an associated ring of smooth functions, and these glue together nicely). Supersymmetric theories work with manifolds which change this sheaf – so a space with fermionic dimensions has a sheaf of rings where one introduces some new antisymmetric coordinate functions $\theta_i$, the "fermionic dimensions".
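A sketch of the local model (the notation is my own choosing, for a space with $p$ bosonic and $q$ fermionic dimensions):

$$ \mathcal{O}(U) \;=\; C^{\infty}(U) \otimes \Lambda^{\bullet}(\theta_1, \dots, \theta_q), \qquad \theta_i \theta_j = -\theta_j \theta_i, $$

so "functions" are polynomials in the anticommuting coordinates $\theta_i$ with smooth coefficients.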
Then a supersymmetric TFT is a functor from a bordism category of such supermanifold "spacetimes" to the category of supersymmetric topological vector spaces (defined similarly). The connection to cohomology theories is that the classes of such field theories, up to a notion of equivalence called "concordance", are classified by various cohomology theories. Ordinary cohomology corresponds then to $(0|1)$-dimensional extended TFT (that is, with 0 bosonic and 1 fermionic dimension), and $K$-theory to a $(1|1)$-dimensional extended TFT. The Stolz-Teichner Conjecture is that the third example (topological modular forms) is related in the same way to a $(2|1)$-dimensional extended TFT – so these are the start of a series of cohomology theories related to various-dimension TFTs.
Last but not least, Chris Rogers spoke about his ideas on "Higher Geometric Quantization", on which he's written a number of papers. This is intended as a sort of categorification of the usual ways of quantizing symplectic manifolds. I am still trying to catch up on some of the geometry. This is rooted in some ideas that have been discussed by Brylinski, for example. Roughly, the message here is that "categorification" of a space can be thought of as a way of acting on the loop space of a space. The point is that, if points in a space are objects and paths are morphisms, then a loop space shifts things by one categorical level: its points are loops in the original space $X$, and its paths are therefore certain 2-morphisms of $X$. In particular, there is a parallel to the fact that a bundle with connection on a loop space can be thought of as a gerbe on the base space. Intuitively, one can "parallel transport" things along a path in the loop space, which is a surface given by a path of loops in the original space. The local description of this situation says that a 1-form (which can give transport along a curve, by integration) on the loop space is associated with a 2-form (giving transport along a surface) on the original space.
Then the idea is that geometric quantization of loop spaces is a sort of higher version of quantization of the original space. This "higher" version is associated with a form of higher degree than the symplectic (2-)form used in geometric quantization of $X$. The general notion of n-plectic geometry, where the usual symplectic geometry is the case $n = 1$, involves a closed $(n+1)$-form analogous to the usual symplectic form. Now, there's a lot more to say here than I properly understand, much less can summarize in a couple of paragraphs. But the main theorem of the talk gives a relation between n-plectic manifolds (i.e. ones endowed with the right kind of form) and Lie n-algebras built from the complex of forms on the manifold. An important example (a theorem of Chris' and John Baez) is that one has a natural example of a 2-plectic manifold in any compact simple Lie group $G$, together with a 3-form naturally constructed from its Maurer-Cartan form.
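For concreteness (this is the standard formula; the talk may have used a different normalization), the 3-form on a compact simple Lie group $G$ is built from the Maurer-Cartan form $\theta$ and the Killing form $\langle \cdot, \cdot \rangle$:

$$ \nu \;=\; \tfrac{1}{6} \langle \theta, [\theta, \theta] \rangle, \qquad \text{i.e.} \quad \nu(x,y,z) = \langle x, [y,z] \rangle $$

on left-invariant vector fields (up to normalization). This form is closed and nondegenerate in the 2-plectic sense.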
At any rate, this workshop had a great proportion of interesting talks, and overall, including the chance to see a little more of China, was a great experience!
2-Erlangen Program; Manifold Calculus talk (Pedro Brito)
Posted by Jeffrey Morton under 2-groups, geometry, moduli spaces, sheaves
(Note: WordPress seems to be having some intermittent technical problem parsing my math markup in this post, so please bear with me until it, hopefully, goes away…)
As August is the month in which Portugal goes on vacation, and we had several family visitors toward the end of the summer, I haven't posted in a while, but the term has now started up at IST, and seminars are underway, so there should be some interesting stuff coming up to talk about.
First, I'll point out that Derek Wise has started a new blog, called simply "Simplicity", which is (I imagine) what it aims to contain: things which seem complex explained so as to reveal their simplicity. Unless I'm reading too much into the title. As of this writing, he's posted only one entry, but a lengthy one that gives a nice explanation of a program for categorified Klein geometries which he's been thinking a bunch about. Klein's program for describing the geometry of homogeneous spaces (such as spherical, Euclidean, and hyperbolic spaces with constant curvature, for example) was developed at Erlangen, and goes by the name "The Erlangen Program". Since Derek is now doing a postdoc at Erlangen, and this is supposed to be a categorification of Klein's approach, he's referred to it as the "2-Erlangen Program". There's more discussion about it in a (somewhat) recent post by John Baez at the n-Category Cafe. Both of them note the recent draft paper they did relating a higher gauge theory based on the Poincare 2-group to a theory known as teleparallel gravity. I don't know this theory so well, except that it's some almost-equivalent way of formulating General Relativity.
I'll refer you to Derek's own post for full details of what's going on in this approach, but the basic motivation isn't too hard to set out. The Erlangen program takes the view that a homogeneous space is a space (let's say we mean by this a topological space) which "looks the same everywhere". More precisely, there's a group action by some $G$, which we understand to be "symmetries" of the space, which is transitive. Since every point is taken to every other point by some symmetry, the space is "homogeneous". Some symmetries leave certain points where they are – they form the stabilizer subgroup $H$. When the space is homogeneous, it is isomorphic to the coset space $G/H$. So Klein's idea is to say that any time you have a Lie group $G$ and a closed subgroup $H$, this quotient will be called a "homogeneous space". A familiar example would be Euclidean space, $\mathbb{R}^n \cong ISO(n)/O(n)$, where $ISO(n)$ is the Euclidean group and $O(n)$ is the orthogonal group, but there are plenty of others.
This example indicates what Cartan geometry is all about, though – this is the next natural step after Klein geometry. (Edit: Derek's blog now has a visual explanation of Cartan geometry, a.k.a. "generalized hamsterology", new since I originally posted this.) We can say that Cartan is to Klein as Riemann is to Euclid. (Or that Cartan is to Riemann as Klein is to Euclid – or if you want to get maybe too-precisely metaphorical, Cartan is the pushout of Klein and Riemann over Euclid.) The point is that Riemannian geometry studies manifolds – spaces which are not homogeneous, but look like Euclidean space locally. Cartan geometry studies spaces which aren't homogeneous, but can be locally modelled by Klein geometries. Now, a Riemannian geometry is essentially a manifold with a metric, describing how it locally looks like Euclidean space. An equivalent way to talk about it is a manifold with a bundle of Euclidean spaces (the tangent spaces) with a connection (the Levi-Civita connection associated to the metric). A Cartan geometry can likewise be described as a bundle with fibre the Klein geometry $G/H$, with a connection.
Then the point of the "2-Erlangen program" is to develop similar geometric machinery for 2-groups (a.k.a. categorical groups). This is, as usual, a bit more complicated since actions of 2-groups are trickier than group-actions. In their paper, though, the point is to look at spaces which are locally modelled by some sort of 2-Klein geometry which derives from the Poincare 2-group. By analogy with Cartan geometry, one can talk about such Poincare 2-group connections on a space – that is, some kind of "higher gauge theory". This is the sort of framework where John and Derek's draft paper formulates teleparallel gravity. It turns out that the 2-group connection ends up looking like a regular connection with torsion, and this plays a role in that theory. Their draft will give you a lot more detail.
Talk on Manifold Calculus
On a different note, one of the first talks I went to so far this semester was one by Pedro Brito about "Manifold Calculus and Operads" (though he ran out of time in the seminar before getting to talk about the connection to operads). This was about motivating and introducing the Goodwillie Calculus for functors between categories of spaces. (There are various references on this, but see for instance these notes by Hal Sadofsky). In some sense this is a generalization of calculus from functions to functors, and one of the main results Goodwillie introduced with this subject, is a functorial analog of Taylor's theorem. I'd seen some of this before, but this talk was a nice and accessible intro to the topic.
So the starting point for this "Manifold Calculus" is that we'd like to study functors from spaces to spaces (in fact this all applies to spectra, which are more general, but Pedro Brito's talk was focused on spaces). The sort of thing we're talking about is a functor which, given a space $M$, gives a moduli space of some sort of geometric structures we can put on $M$, or of mappings from $M$. The main motivating example he gave was the functor
$\mathrm{Imm}(-, N)$, for some fixed manifold $N$. Given a manifold $M$, this gives the mapping space $\mathrm{Imm}(M, N)$ of all immersions of $M$ into $N$.
(Recalling some terminology: immersions are maps of manifolds where the differential is nondegenerate – the induced map of tangent spaces is everywhere injective, meaning essentially that there are no points, cusps, or kinks in the image, but there might be self-intersections. Embeddings are, in addition, homeomorphisms onto their images.)
Studying this functor means, among other things, looking at the various spaces of immersions of each $M$ into $N$. We might first ask: can $M$ be immersed in $N$ at all – in other words, is $\mathrm{Imm}(M, N)$ nonempty?
So, for example, the Whitney Embedding Theorem says that if $n$ is at least $2m$, then any $m$-dimensional manifold $M$ has an embedding into $\mathbb{R}^n$ (which is therefore also an immersion).
In more detail, we might want to know what $\pi_0$ of this space is, which tells how many connected components of immersions there are: in other words, distinct classes of immersions which can't be deformed into one another by a family of immersions. Or, indeed, we might ask about all the homotopy groups of $\mathrm{Imm}(M, N)$, not just the zeroth: what's the homotopy type of $\mathrm{Imm}(M, N)$? (Once we have a handle on this, we would then want to vary $M$.)
It turns out this question is manageable, partly due to a theorem of Smale and Hirsch, an early instance of what Gromov later generalized as the h-principle – that principle applies to solutions of certain kinds of partial differential relations, saying that any formal solution can be deformed to a genuine (holonomic) one, so if you want to study the space of genuine solutions up to homotopy, you may as well just study the formal solutions.
The Smale-Hirsch theorem likewise gives a homotopy equivalence of two spaces, one of which is $\mathrm{Imm}(M, N)$. The other is the space of "formal immersions". It consists of all pairs $(f, F)$, where $f : M \to N$ is smooth, and $F$ is a map of tangent bundles lying over $f$ which is injective on each fibre. These are "formally" like immersions, and indeed $\mathrm{Imm}(M, N)$ has an inclusion into the space of formal immersions (sending an immersion $f$ to $(f, df)$), which happens to be a homotopy equivalence: it induces isomorphisms of all the homotopy groups. These come from homotopies taking each "formal immersion" to some actual immersion. So we've approximated $\mathrm{Imm}(M, N)$, up to homotopy, by the space of formal immersions. (This "homotopy" of functors makes sense because we're talking about an enriched functor – the source and target categories are enriched in spaces, where the concepts of homotopy theory are all available.)
We still haven't got to manifold calculus, but it will be all about approximating one functor by another – or rather, by a chain of functors which are supposed to be like the Taylor series for a function. The way to get this series has to do with sheafification, so first it's handy to re-describe what the Smale-Hirsch theorem says in terms of sheaves. This means we want to talk about some category of spaces with a Grothendieck topology.
So let's let $\mathcal{E}$ be the category whose objects are $m$-dimensional manifolds and whose morphisms are embeddings (which, of course, are necessarily codimension 0). Now, the point here is that if $M'$ has an embedding into $M$, and $M$ has an immersion into $N$, this induces (by composition) an immersion of $M'$ into $N$. This amounts to saying $\mathrm{Imm}(-, N)$ is a contravariant functor $\mathrm{Imm}(-, N) : \mathcal{E}^{op} \to \mathrm{Spaces}$.
That makes $\mathrm{Imm}(-, N)$ a presheaf on $\mathcal{E}$. What the Smale-Hirsch theorem tells us is that this presheaf is a homotopy sheaf – but to understand that, we need a few things first.
First, what's a homotopy sheaf? Well, the condition for a sheaf says that if we have an open cover of $M$, then the value of the functor on $M$ is determined by gluing together its values on the sets of the cover.
So to say how $\mathrm{Imm}(-, N)$ is a homotopy sheaf, we have to give $\mathcal{E}$ a topology, which means defining a "cover", which we do in the obvious way – a cover is a collection of morphisms $U_i \to M$ such that the union of all the images is just $M$. The topology where this is the definition of a cover can be called $J_1$, because it has the property that given any open cover and choice of 1 point in $M$, that point will be in some open set of the cover.
This is part of a family of topologies, where only allows those covers with the property that given any choice of points in , some open set of the cover contains them all. These conditions, clearly, get increasingly restrictive, so we have a sequence of inclusions (a "filtration"):
Now, with respect to any given one of these topologies , we have the usual situation relating sheaves and presheaves. Sheaves are defined relative to a given topology (i.e. a notion of cover). A presheaf on is just a contravariant functor from (in this case valued in spaces); a sheaf is one which satisfies a descent condition (I've discussed this before, for instance here, when I was running the Stacks Seminar at UWO). The point of a descent condition, for a given topology, is that we can take the values of a functor "locally" – on the various objects of a cover for – and "glue" them to find the value for itself. In particular, given a cover for , and a cover, there's a diagram consisting of the inclusions of all the double-overlaps of sets in the cover into the original sets. Then the descent condition for sheaves of spaces is that
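Roughly, in symbols (the notation is my own choice, not the post's): for a space-valued presheaf $F$ and a cover $\{ U_i \to U \}$, the homotopy-descent condition says

$F(U) \;\simeq\; \operatorname{holim}\Big( \prod_i F(U_i) \rightrightarrows \prod_{i,j} F(U_i \cap U_j) \cdots \Big),$

i.e. the value on $U$ is recovered, up to homotopy, from the values on the pieces of the cover and their overlaps. For ordinary set-valued sheaves the homotopy limit collapses to the usual equalizer condition.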
The general fact is that there's a reflective inclusion of sheaves into presheaves (see some discussion about reflective inclusions, also in an earlier post). Any sheaf is in particular a contravariant functor – this gives the inclusion of sheaves into the presheaf category $PSh(\mathcal{E})$. This inclusion has a left adjoint, sheafification, which takes any presheaf in $PSh(\mathcal{E})$ to a sheaf which is the "best approximation" to it. It's the fact that this is an adjoint which makes the inclusion "reflective", and provides the sense in which the sheafification is an approximation to the original functor.
The way sheafification works can be worked out from the fact that it's an adjoint to the inclusion, but it also has a fairly concrete description. Given any one of the topologies , we have a whole collection of special diagrams, such as:
(using the usual notation where is the intersection of two sets in a cover, and the maps here are the inclusions of that intersection). This and the various other diagrams involving these inclusions are special, given the topology . The descent condition for a sheaf says that if we take the image of this diagram:
then we can "glue together" the objects and on the overlap to get one on the union. That is, is a sheaf if is a colimit of the diagram above (intuitively, by "gluing on the overlap"). In a presheaf, it would come equipped with some maps into the and : in a sheaf, this object and the maps satisfy some universal property. Sheafification takes a presheaf to a sheaf which does this, essentially by taking all these colimits. More accurately, since these sheaves are valued in spaces, what we really want are homotopy sheaves, where we can replace "colimit" with "homotopy colimit" in the above – which satisfies a universal property only up to homotopy, and which has a slightly weaker notion of "gluing". This (homotopy) sheaf is called because it depends on the topology which we were using to get the class of special diagrams.
One way to think about is that we take the restriction to manifolds which are made by pasting together at most open balls. Then, knowing only this part of the functor , we extend it back to all manifolds by a Kan extension (this is the technical sense in which it's a "best approximation").
Now the point of all this is that we're building a tower of functors that are "approximately" like , agreeing on ever-more-complicated manifolds, which in our motivating example is . Whichever functor we use, we get a tower of functors connected by natural transformations:
This happens because we had that chain of inclusions of the topologies . Now the idea is that if we start with a reasonably nice functor (like for example), then is just the limit of this diagram. That is, it's the universal thing which has a map into each commuting with all these connecting maps in the tower. The tower of approximations – along with its limit (as a diagram in the category of functors) – is what Goodwillie called the "Taylor tower" for . Then we say the functor is analytic if it's just (up to homotopy!) the limit of this tower.
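Schematically, writing $T_k F$ for the sheafification of $F$ with respect to the $k$-th topology (again, labels of my own choosing), the tower looks like

$F \longrightarrow \cdots \longrightarrow T_3 F \longrightarrow T_2 F \longrightarrow T_1 F,$

and "analytic" means the canonical map from $F$ to the homotopy limit $\operatorname{holim}_k T_k F$ is an equivalence.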
By analogy, think of an inclusion of a vector space with inner product into another such space which has higher dimension. Then there's an orthogonal projection onto the smaller space, which is an adjoint (as a map of inner product spaces) to the inclusion – so these are like our reflective inclusions. So the smaller space can "reflect" the bigger one, while not being able to capture anything in the orthogonal complement. Now suppose we have a tower of inclusions , where each space is of higher dimension, such that each of the is included into in a way that agrees with their maps to each other. Then given a vector , we can take a sequence of approximations in the spaces. If was "nice" to begin with, this series of approximations will eventually at least converge to it – but it may be that our tower of spaces doesn't let us approximate every in this way.
That's precisely what one does in calculus with Taylor series: we have a big vector space of smooth functions, and a tower of spaces we use to approximate. These are polynomial functions of different degrees: first linear, then quadratic, and so forth. The approximations to a function are orthogonal projections onto these smaller spaces. The sequence of approximations, or rather its limit (as a sequence in the inner product space ), is just what we mean by a "Taylor series for ". If is analytic in the first place, then this sequence will converge to it.
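For comparison, the classical statement this is imitating: the degree-$k$ approximation to a smooth function $f$ near a point $a$ is

$P_k f(x) = \sum_{n=0}^{k} \frac{f^{(n)}(a)}{n!} (x-a)^n,$

and $f$ is analytic near $a$ precisely when these approximations converge to $f$ there.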
The same sort of phenomenon is happening with the Goodwillie calculus for functors: our tower of sheafifications of some functor are just "projections" onto smaller categories (of sheaves) inside the category of all contravariant functors. (Actually, "reflections", via the reflective inclusions of the sheaf categories for each of the topologies ). The Taylor Tower for this functor is just like the Taylor series approximating a function. Indeed, this analogy is fairly close, since the topologies will give approximations of which are in some sense based on points (so-called -excisive functors, which in our terminology here are sheaves in these topologies). Likewise, a degree- polynomial approximation approximates a smooth function, in general in a way that can be made to agree at points.
Finally, I'll point out that I mentioned that the Goodwillie calculus is actually more general than this, and applies not only to spaces but to spectra. The point is that the functor defines a kind of generalized cohomology theory – the cohomology groups for are the . So the point is, functors satisfying the axioms of a generalized cohomology theory are represented by spectra, whereas here is a special case that happens to be a space.
Lots of geometric problems can be thought of as classified by this sort of functor – if , the classifying space of a group, and we drop the requirement that the map be an immersion, then we're looking at the functor that gives the moduli space of -connections on each . The point is that the Goodwillie calculus gives a sense in which we can understand such functors by simpler approximations to them.
Dan Christensen on Diffeological Spaces and Homotopy Theory
Posted by Jeffrey Morton under category theory, geometry, homotopy theory, sheaves, smooth spaces, talks, toposes
So Dan Christensen, who used to be my supervisor while I was a postdoc at the University of Western Ontario, came to Lisbon last week and gave a talk about a topic I remember hearing about while I was there. This is the category of diffeological spaces as a setting for homotopy theory. Just to make things scan more nicely, I'm going to say "smooth space" for "diffeological space" here, although this term is in fact ambiguous (see Andrew Stacey's "Comparative Smootheology" for lots of details about options). There's a lot of information about in Patrick Iglesias-Zemmour's draft-of-a-book.
The point of the category , initially, is that it extends the category of manifolds while having some nicer properties. Thus, while all manifolds are smooth spaces, there are others, which allow to be closed under various operations. These would include taking limits and colimits: for instance, any subset of a smooth space becomes a smooth space, and any quotient of a smooth space by an equivalence relation is a smooth space. Then too, has exponentials (that is, if and are smooth spaces, so is ).
So, for instance, this is a good context for constructing loop spaces: a manifold is a smooth space, and so is its loop space , the space of all maps of the circle into . This becomes important for talking about things like higher cohomology, gerbes, etc. When starting with the category of manifolds, doing this requires you to go off and define infinite dimensional manifolds before can even be defined. Likewise, the irrational torus is hard to talk about as a manifold: you take a torus, thought of as . Then take a direction in with irrational slope, and identify any two points which are translates of each other in along the direction of this line. The orbit of any point is then dense in the torus, so this is a very nasty space, certainly not a manifold. But it's a perfectly good smooth space.
Well, these examples motivate the kinds of things these nice categorical properties allow us to do, but wouldn't deserve to be called a category of "smooth spaces" (Souriau's original name for them) if they didn't allow a notion of smooth maps, which is the basis for most of what we do with manifolds: smooth paths, derivatives of curves, vector fields, differential forms, smooth cohomology, smooth bundles, and the rest of the apparatus of differential geometry. As with manifolds, this notion of smooth map ought to get along with the usual notion for in some sense.
Smooth Spaces
Thus, a smooth (i.e. diffeological) space consists of:
A set (of "points")
A set (of "plots") for every n and open such that:
All constant maps are plots
If is a plot, and is a smooth map, is a plot
If is an open cover of , and is a map whose restrictions are all plots, then it is a plot as well
A smooth map between smooth spaces is one that gets along with all this structure (i.e. the composite with every plot is also a plot).
These conditions mean that smooth maps agree with the usual notion in , and we can glue together smooth spaces to produce new ones. A manifold becomes a smooth space by taking all the usual smooth maps to be plots: it's a full subcategory (we introduce new objects which aren't manifolds, but no new morphisms between manifolds). A choice of a set of plots for some space is a "diffeology": there can, of course, be many different diffeologies on a given space.
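Collecting the definition in one place (notation mine): a diffeology on a set $X$ is a collection $\mathcal{D}$ of maps $p\colon U \to X$, with $U \subseteq \mathbb{R}^n$ open, such that (1) every constant map is in $\mathcal{D}$; (2) if $p\colon U \to X$ is in $\mathcal{D}$ and $g\colon V \to U$ is smooth, then $p \circ g \in \mathcal{D}$; and (3) if $\{U_i\}$ is an open cover of $U$ and $p\colon U \to X$ restricts to a member of $\mathcal{D}$ on each $U_i$, then $p \in \mathcal{D}$. A map $f\colon X \to Y$ of smooth spaces is smooth exactly when $f \circ p$ is a plot of $Y$ for every plot $p$ of $X$.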
So, in particular, diffeologies can encode a little more than the charts of a manifold. Just for one example, a diffeology can have "stop signs", as Dan put it – points with the property that any smooth map from which passes through them must stop at that point (have derivative zero – or higher derivatives, if you like). Along the same lines, there's a nonstandard diffeology on itself with the property that any smooth map from this into a manifold must have all derivatives zero at the endpoints. This is a better object for defining smooth fundamental groups: you can concatenate these paths at will and they're guaranteed to be smooth.
As a Quasitopos
An important fact about these smooth spaces is that they are concrete sheaves (i.e. sheaves with underlying sets) on the concrete site (i.e. a Grothendieck site where objects have underlying sets) whose objects are the . This implies many nice things about the category . One is that it's a quasitopos. This is almost the same as a topos (in particular, it has limits, colimits, etc. as described above), but where a topos has a "subobject classifier", a quasitopos has a weak subobject classifier (which, perhaps confusingly, is "weak" because it only classifies the strong subobjects).
So remember that a subobject classifier is an object with a map from the terminal object, so that any monomorphism (subobject) is the pullback of along some map (the classifying map). In the topos of sets, this is just the inclusion of a one-element set into a two-element set : the classifying map for a subset sends everything in (i.e. in the image of the inclusion map) to , and everything else to . (That is, it's the characteristic function.) So pulling back along the classifying map recovers the subobject we started with.
Any topos has one of these – in particular the topos of sheaves on the diffeological site has one. But consists of the concrete sheaves, not all sheaves. The subobject classifier of the topos won't be concrete – but it does have a "concretification", which turns out to be the weak subobject classifier. The subobjects of a smooth space which it classifies (i.e. for which there's a classifying map as above) are exactly the subsets equipped with the subspace diffeology. (Which is defined in the obvious way: the plots are the plots of which land in ).
We'll come back to this quasitopos shortly. The main point is that Dan and his graduate student, Enxin Wu, have been trying to define a different kind of structure on . We know it's good for doing differential geometry. The hope is that it's also good for doing homotopy theory.
As a Model Category
The basic idea here is pretty well supported: naively, one can do a lot of the things done in homotopy theory in : to start with, one can define the "smooth homotopy groups" of a pointed space. It's a theorem by Dan and Enxin that several possible ways of doing this are equivalent. But, for example, Iglesias-Zemmour defines them inductively, so that is the set of path-components of , and is defined recursively using loop spaces, mentioned above. The point is that this all works in much as for topological spaces.
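In the inductive version mentioned, with $x$ a chosen basepoint and $\Omega_x X$ the smooth loop space at $x$ (my notation), this reads: $\pi_0(X)$ is the set of path components of $X$, and $\pi_{n+1}(X, x) = \pi_n(\Omega_x X, c_x)$, where $c_x$ is the constant loop at $x$.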
In particular, there are analogs, for these smooth homotopy groups, of standard theorems like the long exact sequence of homotopy groups for a bundle. Of course, you have to define "bundle" in – it's a smooth surjective map , but saying a diffeological bundle is "locally trivial" doesn't mean "over open neighborhoods", but "under pullback along any plot". (Either of these converts a bundle over a whole space into a bundle over part of , where things are easy to define).
Less naively, the kind of category where homotopy theory works is a model category (see also here). So the project Dan and Enxin have been working on is to give this sort of structure. While there are technicalities behind those links, the essential point is that this means you have a closed category (i.e. with all limits and colimits, which does), on which you've defined three classes of morphisms: fibrations, cofibrations, and weak equivalences. These are supposed to abstract the properties of maps in the homotopy theory of topological spaces – in that case weak equivalences being maps that induce isomorphisms of homotopy groups, the other two being defined by having some lifting properties (i.e. you can lift a homotopy, such as a path, along a fibration).
So to abstract the situation in , these classes have to satisfy some axioms (including an abstract form of the lifting properties). There are slightly different formulations, but for instance, the "2 of 3" axiom says that if two of $f$, $g$ and their composite are weak equivalences, so is the third. Or, again, there should be a factorization for any morphism into a fibration and an acyclic cofibration (i.e. one which is also a weak equivalence), and also vice versa (that is, moving the adjective "acyclic" to the fibration). Defining some classes of maps isn't hard, but it tends to be that proving they satisfy all the axioms IS hard.
Supposing you could do it, though, you have things like the homotopy category (where you formally allow all weak equivalences to have inverses), derived functors (which come from a situation where homotopy theory is "modelled" by categories of chain complexes), and various other fairly powerful tools. Doing this in would make it possible to use these things in a setting that supports differential geometry. In particular, you'd have a lot of high-powered machinery that you could apply to prove things about manifolds, even though it doesn't work in the category itself – only in the larger setting .
Dan and Enxin are still working on nailing down some of the proofs, but it appears to be working. Their strategy is based on the principle that, for purposes of homotopy, topological spaces act like simplicial complexes. So they define an affine "simplex", . These aren't literally simplexes: they're affine planes, which we understand as smooth spaces – with the subspace diffeology from . But they behave like simplexes: there are face and degeneracy maps for them, and the like. They form a "cosimplicial object", which we can think of as a functor (where is the simplex category).
Then the point is one can look at, for a smooth space , the smooth singular simplicial set : it's a simplicial set where the sets are sets of smooth maps from the affine simplex into . Likewise, for a simplicial set , there's a smooth space, the "geometric realization" . These give two functors and , which are adjoints ( is the left adjoint). And then, weak equivalences and fibrations being defined in simplicial sets (w.e. are homotopy equivalences of the realization in , and fibrations are "Kan fibrations"), you can just pull the definition back to : a smooth map is a w.e. if its image under is one. The cofibrations get indirectly defined via the lifting properties they need to have relative to the other two classes.
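In symbols (the names are mine): writing $\mathbb{A}^n$ for the affine $n$-simplex, the smooth singular functor has $S(X)_n = \{ \text{smooth maps } \mathbb{A}^n \to X \}$, the realization $|\cdot|$ goes the other way, and $|\cdot| \dashv S$. A smooth map $f$ is then declared a weak equivalence exactly when $S(f)$ is a weak equivalence of simplicial sets.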
So it's still not completely settled that this definition actually gives a model category structure, but it's pretty close. Certainly, some things are known. For instance, Enxin Wu showed that if you have a fibrant object (i.e. one where the unique map to the terminal object is a fibration – these are generally the "good" objects to define homotopy groups on), then the smooth homotopy groups agree with the simplicial ones for . This implies that for these objects, the weak equivalences are exactly the smooth maps that give isomorphisms for homotopy groups. And so forth. But notice that even some fairly nice objects aren't fibrant: two lines glued together at a point isn't, for instance.
There are various further results. One, a consequence of a result Enxin proved, is that all manifolds are fibrant objects, where these nice properties apply. It's interesting that this comes from the fact that, in , every (connected) manifold is a homogeneous space. These are quotients of smooth groups – the space is a space of cosets, and is understood to be the stabilizer of the point. Usually one thinks of homogeneous spaces as fairly rigid things: the Euclidean plane, say, where is the whole Euclidean group, and the rotations; or a sphere, where is all n-dimensional rotations, and the ones that fix some point on the sphere. (Actually, this gives a projective plane, since opposite points on the sphere get identified. But you get the idea). But that's for Lie groups. The point is that , the space of diffeomorphisms from to itself, is a perfectly good smooth group. Then the subgroup of diffeomorphisms that fix any point is a fine smooth subgroup, and is a homogeneous space in . But that's just , with acting transitively on it – any point can be taken anywhere on .
Cohesive Infinity-Toposes
One further thing I'd mention here concerns a related but more abstract approach to the question of how to incorporate homotopy-theoretic tools with a setting that supports differential geometry. This is the notion of a cohesive topos, and more generally of a cohesive infinity-topos. Urs Schreiber has advocated for this approach, for instance. It doesn't really conflict with the kind of thing Dan was talking about, but it gives a setting for it with a lot of abstract machinery. I won't try to explain the details (which anyway I'm not familiar with), but just enough to suggest how the two seem to me to fit together, after discussing it a bit with Dan.
The idea of a cohesive topos seems to start with Bill Lawvere, and it's supposed to characterize something about those categories which are really "categories of spaces" the way is. Intuitively, spaces consist of "points", which are held together in lumps we could call "pieces". Hence "cohesion": the points of a typical space cohere together, rather than being a dust of separate elements. When that happens, in a discrete space, we just say that each piece happens to have just one point in it – but a priori we distinguish the two ideas. So we might normally say that has an "underlying set" functor , and its left adjoint, the "discrete space" functor (left adjoint since set maps from are the same as continuous maps from – it's easy for maps out of to be continuous, since every subset is open).
In fact, any topos of sheaves on some site has a pair of functors like this (where becomes , the "set of global sections" functor), essentially because is the topos of sheaves on a single point, and there's a terminal map from any site into the point. So this adjoint pair is the "terminal geometric morphism" into .
But this omits a couple of other things that apply to : has a right adjoint, , where has only and as its open sets. In , all the points are "stuck together" in one piece. On the other hand, itself has a left adjoint, , which gives the set of connected components of a space. is another kind of "underlying set" of a space. So we call a topos "cohesive" when the terminal geometric morphism extends to a chain of four adjoint functors in just this way, which satisfy a few properties that characterize what's happening here. (We can talk about "cohesive sites", where this happens.)
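So, in the case of $\mathbf{Top}$, the chain of four adjoints is (the functor names are my labels for the ones described above)

$\pi_0 \dashv \mathrm{disc} \dashv U \dashv \mathrm{codisc},$

where $\pi_0, U \colon \mathbf{Top} \to \mathbf{Set}$ take a space to its set of connected components and to its underlying set, and $\mathrm{disc}, \mathrm{codisc} \colon \mathbf{Set} \to \mathbf{Top}$ equip a set with the discrete and the indiscrete topology respectively.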
Now isn't exactly a category of sheaves on a site: it's the category of concrete sheaves on a (concrete) site. There is a cohesive topos of all sheaves on the diffeological site. (What's more, it's known to have a model category structure). But now, it's a fact that any cohesive topos has a subcategory of concrete objects (ones where the canonical unit map is mono: roughly, we can characterize the morphisms of by what they do to its points). This category is always a quasitopos (and it's a reflective subcategory of : see the previous post for some comments about reflective subcategories if interested…) This is where fits in here. Diffeologies define a "cohesion" just as topologies do: points are in the same "piece" if there's some plot from a connected part of that lands on both. Why is only a quasitopos? Because in general, the subobject classifier in isn't concrete – but it will have a "concretification", which is the weak subobject classifier I mentioned above.
Where the "infinity" part of "infinity-topos" comes in is the connection to homotopy theory. Here, we replace the topos with the infinity-topos of infinity-groupoids. Then the "underlying" functor captures not just the set of points of a space , but its whole fundamental infinity-groupoid. Its objects are points of , its morphisms are paths, 2-morphisms are homotopies of paths, and so on. All the homotopy groups of live here. So a cohesive inifinity-topos is defined much like above, but with playing the role of , and with that functor replaced by , something which, implicitly, gives all the homotopy groups of . We might look for cohesive infinity-toposes to be given by the (infinity)-categories of simplicial sheaves on cohesive sites.
This raises a point Dan made in his talk: over the diffeological site , we can talk about a cube of different structures that live over it, starting with presheaves: . We can add different modifiers to this: the sheaf condition; the adjective "concrete"; the adjective "simplicial". Various combinations of these adjectives (e.g. simplicial presheaves) are known to have a model structure. is the case where we have concrete sheaves on . So far, it hasn't been proved, but it looks like it shortly will be, that this has a model structure. This is a particularly nice one, because these things really do seem a lot like spaces: they're just sets with some easy-to-define and well-behaved (that's what the sheaf condition does) structure on them, and they include all the examples a differential geometer requires, the manifolds.
Relativity of Localization
Posted by Jeffrey Morton under algebra, geometry, phase space, philosophical, physics, quantum gravity, talks
One talk at the workshop was nominally a school talk by Laurent Freidel, but it's interesting and distinctive enough in its own right that I wanted to consider it by itself. It was based on this paper on the "Principle of Relative Locality". This isn't so much a new theory, as an exposition of what ought to happen when one looks at a particular limit of any putative theory that has both quantum field theory and gravity as (different) limits of it. This leads through some ideas, such as curved momentum space, which have been kicking around for a while. The end result is a way of accounting for apparently non-local interactions of particles, by saying that while the particles themselves "see" the interactions as local, distant observers might not.
Einstein's gravity describes a regime where Newton's gravitational constant is important but Planck's constant is negligible, whereas (special-relativistic) quantum field theory assumes significant but not. Both of these assume there is a special velocity scale, given by the speed of light , whereas classical mechanics assumes that all three can be neglected (i.e. and are zero, and is infinite). The guiding assumption is that these are all approximations to some more fundamental theory, called "quantum gravity" just because it accepts that both and (as well as ) are significant in calculating physical effects. So GR and QFT incorporate two of the three constants each, and classical mechanics incorporates neither. The "principle of relative locality" arises when we consider a slightly different approximation to this underlying theory.
This approximation works with a regime where and are each negligible, but the ratio is not – this being related to the Planck mass . The point is that this is an approximation with no special length scale ("Planck length"), but instead a special energy scale ("Planck mass") which has to be preserved. Since energy and momentum are different parts of a single 4-vector, this is also a momentum scale; we expect to see some kind of deformation of momentum space, at least for momenta that are bigger than this scale. The existence of this scale turns out to mean that momenta don't add linearly – at least, not unless they're very small compared to the Planck scale.
So what is "Relative Locality"? In the paper linked above, it's stated like so:
Physics takes place in phase space and there is no invariant global projection that gives a description of processes in spacetime. From their measurements local observers can construct descriptions of particles moving and interacting in a spacetime, but different observers construct different spacetimes, which are observer-dependent slices of phase space.
This arises from taking the basic insight of general relativity – the requirement that physical principles should be invariant under coordinate transformations (i.e. diffeomorphisms) – and extending it so that instead of applying just to spacetime, it applies to the whole of phase space. Phase space (which, in this limit where , replaces the Hilbert space of a truly quantum theory) is the space of position-momentum configurations (of things small enough to treat as point-like, in a given fixed approximation). Having no means we don't need to worry about any dynamical curvature of "spacetime" (which doesn't exist), and having no Planck length means we can blithely treat phase space as a manifold with coordinates valued in the real line (which has no special scale). Yet, having a special mass/momentum scale says we should see some purely combined "quantum gravity" effects show up.
The physical idea is that phase space is an accurate description of what we can see and measure locally. Observers (whom we assume small enough to be considered point-like) can measure their own proper time (they "have a clock") and can detect momenta (by letting things collide with them and measuring the energy transferred locally and its direction). That is, we "see colors and angles" (i.e. photon energies and differences of direction). Beyond this, one shouldn't impose any particular theory of what momenta do: we can observe the momenta of separate objects and see what results when they interact and deduce rules from that. As an extension of standard physics, this model is pretty conservative. Now, conventionally, phase space would be the cotangent bundle of spacetime . This model is based on the assumption that objects can be at any point, and wherever they are, their space of possible momenta is a vector space. Being a bundle, with a global projection onto (taking to ), is exactly what this principle says doesn't necessarily obtain. We still assume that phase space will be some symplectic manifold. But we don't assume a priori that momentum coordinates give a projection whose fibres happen to be vector spaces, as in a cotangent bundle.
Now, a symplectic manifold still looks locally like a cotangent bundle (Darboux's theorem). So even if there is no universal "spacetime", each observer can still locally construct a version of "spacetime" by slicing up phase space into position and momentum coordinates. One can, by brute force, extend the spacetime coordinates quite far, to distant points in phase space. This is roughly analogous to how, in special relativity, each observer can put their own coordinates on spacetime and arrive at different notions of simultaneity. In general relativity, there are issues with trying to extend this concept globally, but it can be done under some conditions, giving the idea of "space-like slices" of spacetime. In the same way, we can construct "spacetime-like slices" of phase space.
Geometrizing Algebra
Now, if phase space is a cotangent bundle, momenta can be added (the fibres of the bundle are vector spaces). Some more recent ideas about "quasi-Hamiltonian spaces" (initially introduced by Alekseev, Malkin and Meinrenken) conceive of momenta as "group-valued" – rather than taking values in the dual of some Lie algebra (the way, classically, momenta are dual to velocities, which live in the Lie algebra of infinitesimal translations). For small momenta, these are hard to distinguish, so even group-valued momenta might look linear, but the premise is that we ought to discover this by experiment, not assumption. We certainly can detect "zero momentum" and for physical reasons can say that given two things with two momenta , there's a way of combining them into a combined momentum . Think of doing this physically – transfer all momentum from one particle to another, as seen by a given observer. Since the same momentum at the observer's position can be either coming in or going out, this operation has a "negative" with .
We do have a space of momenta at any given observer's location – the total of all momenta that can be observed there, and this space now has some algebraic structure. But we have no reason to assume up front that is either commutative or associative (let alone that it makes momentum space at a given observer's location into a vector space). One can interpret this algebraic structure as giving some geometry. The commutator for gives a metric on momentum space. This is a bilinear form which is implicitly defined by the "norm" that assigns a kinetic energy to a particle with a given momentum. The associator given by , infinitesimally near where this makes sense, gives a connection. This defines a "parallel transport" of a finite momentum in the direction of a momentum by saying infinitesimally what happens when adding to .
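A rough sketch of how geometry gets read off from the algebra, as I understand the conventions of the relative-locality paper (so the exact index placement and the power of the Planck mass here are assumptions of mine): for small momenta one expands

$(p \oplus q)_\mu \;\approx\; p_\mu + q_\mu - \tfrac{1}{m_P}\, \Gamma_\mu^{\;\nu\rho}\, p_\nu\, q_\rho + \cdots,$

so the leading nonlinearity of the combination rule is read as connection coefficients on momentum space, while the metric is the bilinear form implicit in the kinetic-energy "norm" just described.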
Various additional physical assumptions – like the momentum-space "duals" of the equivalence principle (that the combination of momenta works the same way for all kinds of matter regardless of charge), or the strong equivalence principle (that inertial mass and rest mass energy per the relation are the same) and so forth can narrow down the geometry of this metric and connection. Typically we'll find that it needs to be Lorentzian. With strong enough symmetry assumptions, it must be flat, so that momentum space is a vector space after all – but even with fairly strong assumptions, as with general relativity, there's still room for this "empty space" to have some intrinsic curvature, in the form of a momentum-space "dual cosmological constant", which can be positive (so momentum space is closed like a sphere), zero (the vector space case we usually assume) or negative (so momentum space is hyperbolic).
This geometrization of what had been algebraic is somewhat analogous to what happened with velocities (i.e. vectors in spacetime) when the theory of special relativity came along. Insisting that the "invariant" scale be the same in every reference system meant that the addition of velocities ceased to be linear. At least, it did if you assume that adding velocities has an interpretation along the lines of: "first, from rest, add velocity v to your motion; then, from that reference frame, add velocity w". While adding spacetime vectors still worked the same way, one had to rephrase this rule if we think of adding velocities as observed within a given reference frame – this became (scaling so and assuming the velocities are in the same direction). When velocities are small relative to , this looks roughly like linear addition. Geometrizing the algebra of momentum space is thought of a little differently, but similar things can be said: we think operationally in terms of combining momenta by some process. First transfer (group-valued) momentum to a particle, then momentum – the connection on momentum space tells us how to translate these momenta into the "reference frame" of a new observer with momentum shifted relative to the starting point. Here again, the special momentum scale (which is also a mass scale since a momentum has a corresponding kinetic energy) is a "deformation" parameter – for momenta that are small compared to this scale, things seem to work linearly as usual.
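The velocity-addition rule being referred to, for velocities along the same line and with $c$ restored rather than scaled to 1, is the familiar

$u \oplus v = \frac{u + v}{1 + uv/c^2},$

which reduces to the linear rule $u + v$ when $u, v \ll c$.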
There's some discussion in the paper which relates this to DSR (either "doubly" or "deformed" special relativity), which is another postulated limit of quantum gravity, a variation of SR with both a special velocity and a special mass/momentum scale, to consider "what SR looks like near the Planck scale", which treats spacetime as a noncommutative space, and generalizes the Lorentz group to a Hopf algebra which is a deformation of it. In DSR, the noncommutativity of "position space" is directly related to curvature of momentum space. In the "relative locality" view, we accept a classical phase space, but not a classical spacetime within it.
Physical Implications
We should understand this scale as telling us where "quantum gravity effects" should start to become visible in particle interactions. This is a fairly large scale for subatomic particles. The Planck mass as usually given is about 21 micrograms: small for normal purposes, about the size of a small sand grain, but very large for subatomic particles. Converting to momentum units with , this is about 6 kg m/s: on the order of the momentum of a kicked soccer ball or so. For a subatomic particle this is a lot.
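For reference, the arithmetic behind those figures (standard values, not taken from the talk): $m_P = \sqrt{\hbar c / G} \approx 2.18 \times 10^{-8}\ \mathrm{kg}$, and so $m_P c \approx 6.5\ \mathrm{kg\,m/s}$.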
This scale does raise a question for many people who first hear this argument, though: if quantum gravity effects should become apparent around the Planck mass/momentum scale, why do macro-objects like the aforementioned soccer ball still seem to have linearly-additive momenta? Laurent explained the problem with this intuition. For interactions of big, extended, but composite objects like soccer balls, one has to calculate not just one interaction, but all the various interactions of their parts, so the "effective" mass scale where the deformation would be seen becomes where is the number of particles in the soccer ball. Roughly, the point is that a soccer ball is not a large "thing" for these purposes, but a large conglomeration of small "things", whose interactions are "fundamental". The "effective" mass scale tells us how we would have to alter the physical constants to be able to treat it as a "thing". (This is somewhat related to the question of "effective actions" and renormalization, though these are a bit more complicated.)
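In a formula, with $N$ standing for the number of constituent particles (a label I'm introducing): the scale at which the deformation would show up for the composite object is of order $N\, m_P$, which for anything with macroscopically many constituents lies far beyond the momenta it actually carries.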
There are a number of possible experiments suggested in the paper, which Laurent mentioned in the talk. One involves a kind of "twin paradox" taking place in momentum space. In "spacetime", a spaceship travelling a large loop at high velocity will arrive where it started having experienced less time than an observer who remained there (because of the Lorentzian metric) – and a dual phenomenon in momentum space says that particles travelling through loops (also in momentum space) should arrive displaced in space because of the relativity of localization. This could be observed in particle accelerators where particles make several transits of a loop, since the effect is cumulative. Another effect could be seen in astronomical observations: if an observer is observing some distant object via photons of different wavelengths (hence momenta), she might "localize" the object differently – that is, the two photons travel at "the same speed" the whole way, but arrive at different times because the observer will interpret the object as being at two different distances for the two photons.
This last one is rather weird, and I had to ask how one would distinguish this effect from a variable speed of light (predicted by certain other ideas about quantum gravity). How to distinguish such effects seems to be not quite worked out yet, but at least this is an indication that there are new, experimentally detectable effects predicted by this "relative locality" principle. As Laurent emphasized, once we've noticed that not accepting this principle means making an a priori assumption about the geometry of momentum space (even if only in some particular approximation, or limit, of a true theory of quantum gravity), we're pretty much obliged to stop making that assumption and do the experiments. Finding our assumptions were right would simply be revealing which momentum space geometry actually obtains in the approximation we're studying.
A final note about the physical interpretation: this "relative locality" principle can be discovered by looking (in the relevant limit) at a Lagrangian for free particles, with interactions described in terms of momenta. It so happens that one can describe this without referencing a "real" spacetime: the part of the action that allows particles to interact when "close" only needs coordinate functions, which can certainly exist here, but are an observer-dependent construct. The conservation of (non-linear) momenta is specified via a Lagrange multiplier. The whole Lagrangian formalism for the mechanics of colliding particles works without reference to spacetime. Now, all the interactions (specified by the conservation of momentum terms) happen "at one location", in that there will be an observer who sees them happening in the momentum space of her own location. But an observer at a different point may disagree about whether the interaction was local – i.e. happened at a single point in spacetime. Thus "relativity of localization".
Again, this is no more bizarre (mathematically) than the fact that distant, relatively moving, observers in special relativity might disagree about simultaneity, whether two events happened at the same time. They have their own coordinates on spacetime, and transferring between them mixes space coordinates and time coordinates, so they'll disagree whether the time-coordinate values of two events are the same. Similarly, in this phase-space picture, two different observers each have a coordinate system for splitting phase space into "spacetime" and "energy-momentum" coordinates, but switching between them may mix these two pieces. Thus, the two observers will disagree about whether the spacetime-coordinate values for the different interacting particles are the same. And so, one observer says the interaction is "local in spacetime", and the other says it's not. The point is that it's local for the particles themselves (thinking of them as observers). All that's going on here is the not-very-astonishing fact that in the conventional picture, we have no problem with interactions being nonlocal in momentum space (particles with very different momenta can interact as long as they collide with each other)… combined with the inability to globally and invariantly distinguish position and momentum coordinates.
What this means, philosophically, can be debated, but it does offer some plausibility to the claim that space and time are auxiliary, conceptual additions to what we actually experience, which just account for the relations between bits of matter. These concepts can be dispensed with even where we have a classical-looking phase space rather than Hilbert space (where, presumably, this is even more true).
Edit: On a totally unrelated note, I just noticed this post by Alex Hoffnung over at the n-Category Cafe which gives a lot of detail on issues relating to spans in bicategories that I had begun to think more about recently in relation to developing a higher-gauge-theoretic version of the construction I described for ETQFT. In particular, I'd been thinking about how the 2-group analog of restriction and induction for representations realizes the various kinds of duality properties, where we have adjunctions, biadjunctions, and so forth, in which units and counits of the various adjunctions have further duality. This observation seems to be due to Jim Dolan, as far as I can see from a brief note in HDA II. In that case, it's really talking about the star-structure of the span (tri)category, but looking at the discussion Alex gives suggests to me that this theme shows up throughout this subject. I'll have to take a closer look at the draft paper he linked to and see if there's more to say…
HGTQGR – Part IIIb (Workshop)
Posted by Jeffrey Morton under cohomology, conferences, geometry, gerbes, higher gauge theory, physics, quantum gravity, stacks, string theory, talks
As usual, this write-up process has been taking a while since life does intrude into blogging for some reason. In this case, because for a little less than a week, my wife and I have been on our honeymoon, which was delayed by our moving to Lisbon. We went to the Azores, or rather to São Miguel, the largest of the nine islands. We had a good time, roughly like so:
Now that we're back, I'll attempt to wrap up with the summaries of things discussed at the workshop on Higher Gauge Theory, TQFT, and Quantum Gravity. In the previous post I described talks which I roughly gathered under TQFT and Higher Gauge Theory, but the latter really ramifies out in a few different ways. As began to be clear before, higher bundles are classified by higher cohomology of manifolds, and so are gerbes – so in fact these are two slightly different ways of talking about the same thing. I also remarked, in the summary of Konrad Waldorf's talk, on the idea that the theory of gerbes on a manifold is equivalent to ordinary gauge theory on its loop space – which is one way to make explicit the idea that categorification "raises dimension", in this case from parallel transport of points to that of 1-dimensional loops. Next we'll expand on that theme, and then finally reach the "Quantum Gravity" part, and draw the connection between this and higher gauge theory toward the end.
Gerbes and Cohomology
The very first workshop speaker, in fact, was Paolo Aschieri, who has done a lot of work relating noncommutative geometry and gravity. In this case, though, he was talking about noncommutative gerbes, and specifically referred to this work with some of the other speakers. To be clear, this isn't about gerbes with noncommutative group , but about gerbes on noncommutative spaces. To begin with, it's useful to express gerbes in the usual sense in the right language. In particular, he explained what a gerbe on a manifold is in concrete terms, giving Hitchin's definition (viz). A gerbe can be described as "a cohomology class" but it's more concrete to present it as:
a collection of line bundles associated with double overlaps . Note this gets an algebraic structure (multiplication of bundles is pointwise , with an inverse given by the dual, ), so we can require…
, which helps define…
transition functions on triple overlaps , which are sections of . If this product is trivial, there'd be a 1-cocycle condition here, but we only insist on the 2-cocycle condition (spelled out in the sketch just below)…
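Spelled out, with Čech-style indices chosen here since the post's own symbols aren't fixed: the data is line bundles $L_{ij} \to U_{ij} = U_i \cap U_j$ with $L_{ji} \cong L_{ij}^{*}$; sections $\theta_{ijk}$ of $L_{ij} \otimes L_{jk} \otimes L_{ki}$ on triple overlaps $U_{ijk}$; and the 2-cocycle condition on quadruple overlaps, which in terms of these trivializations reads $\theta_{jkl}\, \theta_{ikl}^{-1}\, \theta_{ijl}\, \theta_{ijk}^{-1} = 1$ on $U_{ijkl}$.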
This is a -gerbe on a commutative space. The point is that one can make a similar definition for a noncommutative space. If the space is associated with the algebra of smooth functions, then a line bundle is a module for , so if is noncommutative (thought of as a "space" ), a "bundle over " is just defined to be an -module. One also has to define an appropriate "covariant derivative" operator on this module, and the -product must be defined as well, and will be noncommutative (we can think of it as a deformation of the above). The transition functions are sections: that is, elements of the modules in question. This means we can describe a gerbe in terms of a big stack of modules, with a chosen algebraic structure, together with some elements. The idea then is that gerbes can give an interpretation of cohomology of noncommutative spaces as well as commutative ones.
Mauro Spera spoke about a point of view of gerbes based on "transgressions". The essential point is that an -gerbe on a space can be seen as the obstruction to patching together a family of -gerbes. Thus, for instance, a 0-gerbe is a -bundle, which is to say a complex line bundle. As described above, a 1-gerbe can be understood as describing the obstacle to patching together a bunch of line bundles, the obstacle being the failure to find a cocycle satisfying the requisite conditions. This obstacle is measured by the cohomology of the space. Saying we want to patch together -gerbes on the fibre. He went on to discuss how this manifests in terms of obstructions to string structures on manifolds (already discussed at some length in the post on Hisham Sati's school talk, so I won't duplicate here).
A talk by Igor Bakovic, "Stacks, Gerbes and Etale Groupoids", gave a way of looking at gerbes via stacks (see this for instance). The organizing principle is the classification of bundles by the space of maps into a classifying space – or, to get the category of principal -bundles on , the category , where is the category of sheaves on and is the classifying topos of -sets. (So we have geometric morphisms between the toposes as the objects.) Now, to get further into this, we use that is equivalent to the category of Étale spaces over – this is a refinement of the equivalence between bundles and presheaves. Taking stalks of a presheaf gives a bundle, and taking sections of a bundle gives a presheaf – and these operations are adjoint.
The issue at hand is how to categorify this framework to talk about 2-bundles, and the answer is there's a 2-adjunction between the 2-category of such things, and , the 2-category of fibred categories over . (That is, instead of looking at "sheaves of sets", we look at "sheaves of categories" here.) The adjunction, again, involves taking stalks one way, and taking sections the other way. One hard part of this is getting a nice definition of "stalk" for stacks (i.e. for the "sheaves of categories"), and a good part of the talk focused on explaining how to get a nice tractable definition which is (fibre-wise) equivalent to the more natural one.
Bakovic did a bunch of this work with Branislav Jurco, who was also there, and spoke about "Nonabelian Bundle 2-Gerbes". The paper behind that link has more details, which I've yet to entirely absorb, but the essential point appears to be to extend the description of "bundle gerbes" associated to crossed modules up to 2-crossed modules. Bundles, with a structure-group , are classified by the cohomology with coefficients in ; "bundle-gerbes" with a structure-crossed-module can likewise be described by cohomology . Notice this is a bit different from the description in terms of higher cohomology for a -gerbe, which can be understood as a bundle-gerbe using the shifted crossed module (when is abelian). The goal here is to generalize this part to nonabelian groups, and also pass up to "bundle 2-gerbes" based on a 2-crossed module, or crossed complex of length 2, as I described previously for Joao Martins' talk. This would be classified in terms of cohomology valued in the 2-crossed module. The point is that one can describe such a thing as a bundle over a fibre product, which (I think – I'm not so clear on this part) deals with the same structure of overlaps as the higher cohomology in the other way of describing things.
Finally, a talk that's a little harder to classify than most, but which I've put here with things somewhat related to string theory, was Alexander Kahle's on "T-Duality and Differential K-Theory", based on work with Alessandro Valentino. This uses the idea of the differential refinement of cohomology theories – in this case, K-theory, which is a generalized cohomology theory, which is to say that K-theory satisfies the Eilenberg-Steenrod axioms (with the dimension axiom relaxed, hence "generalized"). Cohomology theories, including generalized ones, can have differential refinements, which pass from giving topological to geometrical information about a space. So, while K-theory assigns to a space the Grothendieck ring of the category of vector bundles over it, the differential refinement of K-theory does the same with the category of vector bundles with connection. This captures both local and global structures, which turns out to be necessary to describe fields in string theory – specifically, Ramond-Ramond fields. The point of this talk was to describe what happens to these fields under T-duality. This is a kind of duality in string theory between a theory with large strings and small strings. The talk describes how this works, where we have a manifold with fibres at each point – strings of radius in one case and of radius in the other. There's a correspondence space , which has projection maps down into the two situations. Fields, being forms on such a fibration, can be "transferred" through this correspondence space by a "pull-back and push-forward" (with, in the middle, a wedge with a form that mixes the two directions, ). But to be physically the right kind of field, these "forms" actually need to be representing cohomology classes in the differential refinement of K-theory.
Quantum Gravity etc.
Now, part of the point of this workshop was to try to build, or anyway maintain, some bridges between the kind of work in geometry and topology which I've been describing and the world of physics. There are some particular versions of physical theories where these ideas have come up. I've already touched on string theory along the way (there weren't many talks about it from a physicist's point of view), so this will mostly be about a different sort of approach.
Benjamin Bahr gave a talk outlining this approach for our mathematician-heavy audience, with his talk on "Spin Foam Operators" (see also for instance this paper). The point is that one approach to quantum gravity has a theory whose "kinematics" (the description of the state of a system at a given time) is described by "spin networks" (based on gauge theory), as described back in the pre-school post. These span a Hilbert space, so the "dynamical" issue of such models is how to get operators between Hilbert spaces from "foams" that interpolate between such networks – that is, what kind of extra data they might need, and how to assign amplitudes to faces and edges etc. to define an operator, which (assuming a "local" theory where distant parts of the foam affect the result independently) will be of the form:
where is a particular complex (foam), is a way of assigning irreps to faces of the foam, and is the assignment of intertwiners to edges. Later on, one can take a discrete version of a path integral by summing over all these . Here we have a product over faces and one over vertices, with an amplitude assigned (somehow – this is the issue) to faces. The trace is over all the representation spaces assigned to the edges that are incident to a vertex (this is essentially the only consistent way to assign an amplitude to a vertex). If we also consider spacetimes with boundary, we need some amplitudes at the boundary edges, as well. A big part of the work with such models is finding such amplitudes that meet some nice conditions.
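In standard spin-foam notation (my choice of symbols, not necessarily the talk's), the operator attached to a fixed labelled foam has the shape

$Z[\kappa, \rho, \iota] \;=\; \prod_{f \subset \kappa} A_f(\rho_f)\ \prod_{v \in \kappa} \mathrm{Tr}_v(\rho, \iota),$

where $\kappa$ is the 2-complex, $\rho$ assigns an irrep to each face, $\iota$ an intertwiner to each edge, $A_f$ is the face amplitude, and the trace at each vertex contracts the intertwiners on the edges incident to it; the discrete path integral mentioned above is then the sum of this over all labellings $(\rho, \iota)$.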
Some of these conditions are inherently necessary – to ensure the theory is invariant under gauge transformations, or (formally) changing orientations of faces. Others are considered optional, though to me "functoriality" (that the way of deriving operators respects the gluing-together of foams) seems unavoidable – it imposes that the boundary amplitudes have to be found from the in one specific way. Some other nice conditions might be: that depends only on the topology of (which demands that the operators be projections); that is invariant under subdivision of the foam (which implies the amplitudes have to be ).
Assuming all these means the only choice left is exactly which sub-projection of the projection onto the gauge-invariant part of the representation space, for the faces attached to edge , is used. The rest of the talk discussed this, including some examples (models for BF-theory, the Barrett-Crane model and the more recent EPRL/FK model), and finished up by discussing issues about getting a nice continuum limit by way of "coarse graining".
On a related subject, Bianca Dittrich spoke about "Dynamics and Diffeomorphism Symmetry in Discrete Quantum Gravity", which explained the nature of some of the hard problems with this sort of discrete model of quantum gravity. She began by asking what sort of models (i.e. which choices of amplitudes in such discrete settings) would actually produce a nice continuum theory – since gravity, classically, is described in terms of spacetimes which are continua, and the quantum theory must look like this in some approximation. The point is to think of these as "coarse-graining" of a very fine (perfect, in the limit) approximation to the continuum by a triangulation with a very short length-scale for the edges. Coarse graining means discarding some of the edges to get a coarser approximation (perhaps repeatedly). If the happens to be triangulation-independent, then coarse graining makes no difference to the result, nor does the converse process of refining the triangulation. So one question is: if we expect the continuum limit to be diffeomorphism invariant (as is General Relativity), what does this say at the discrete level? The relation between diffeomorphism invariance and triangulation invariance has been described by Hendryk Pfeiffer, and in the reverse direction by Dittrich et al.
Actually constructing the dynamics for a system like this in a nice way ("canonical dynamics with anomaly-free constraints") is still a big problem, which Bianca suggested might be approached by this coarse-graining idea. Now, if a theory is topological (here we get the link to TQFT), such as electromagnetism in 2D, or (linearized) gravity in 3D, coarse graining doesn't change much. But otherwise, changing the length scale means changing the action for the continuum limit of the theory. This is related to renormalization: one starts with a "naive" guess at a theory, then refines it (in this case, by the coarse-graining process), which changes the action for the theory, until arriving at (or approximating to) a fixed point. Bianca showed an example, which produces a really huge, horrible action full of very complicated terms, which seems rather dissatisfying. What's more, she pointed out that, unless the theory is topological, this always produces an action which is non-local – unlike the "naive" discrete theory. That is, the action can't be described in terms of a bunch of non-interacting contributions from the field at individual points – instead, it's some function which couples the field values at distant points (albeit in a way that falls off exponentially as the points get further apart).
In a more specific talk, Aleksandr Mikovic discussed "Finiteness and Semiclassical Limit of EPRL-FK Spin Foam Models", looking at a particular example of such models which is the (relatively) new-and-improved candidate for quantum gravity mentioned above. This was a somewhat technical talk, which I didn't entirely follow, but roughly, the way he went at this was through the techniques of perturbative QFT. That is, by looking at the theory in terms of an "effective action", instead of some path integral over histories with action – which looks like . Starting with some classical history – a stationary point of the action – the effective action is an integral over small fluctuations around it of .
He commented more on the distinction between the question of triangulation independence (which is crucial for using spin foams to give invariants of manifolds) and the question of whether the theory gives a good quantum theory of gravity – that's the "semiclassical limit" part. (In light of the above, this seems to amount to asking if "diffeomorphism invariance" really extends through to the full theory, or is only approximately true, in the limiting case). Then the "finiteness" part has to do with the question of getting decent asymptotic behaviour for some of those weights mentioned above so as to give a nice effective action (if not necessarily triangulation independence). So, for instance, in the Ponzano-Regge model (which gives a nice invariant for manifolds), the vertex amplitudes are given by the 6j-symbols of the representations. The asymptotics of the 6j symbols then becomes an issue – Aleksandr noted that to get a theory with a nice effective action, those 6j-symbols need to be scaled by a certain factor. This breaks triangulation independence (hence means we don't have a good manifold invariant), but gives a physically nicer theory. In the case of 3D gravity, this is not what we want, but as he said, there isn't a good a-priori reason to think it can't give a good theory of 4D gravity.
Now, making a connection between these sorts of models and higher gauge theory, Aristide Baratin spoke about "2-Group Representations for State Sum Models". This is a project of Baez, Freidel, and Wise, building on work by Crane and Sheppard (see my previous post, where Derek described the geometry of the representation theory for some 2-groups). The idea is to construct state-sum models where, at the kinematical level, edges are labelled by 2-group representations, faces by intertwiners, and tetrahedra by 2-intertwiners. (This assumes the foam is a triangulation – there's a certain amount of back-and-forth in this area between this, and the Poincaré dual picture where we have 4-valent vertices). He discussed this in a couple of related cases – the Euclidean and Poincaré 2-groups, which are described by crossed modules with the respective base groups acting on the abelian group (of automorphisms of the identity) in the obvious way. Then the analogy of the 6j symbols above, which are assigned to tetrahedra (or dually, vertices in a foam interpolating two kinematical states), are now 10j symbols assigned to 4-simplexes (or dually, vertices in the foam).
One nice thing about this setup is that there's a good geometric interpretation of the kinematics – irreducible representations of these 2-groups pick out orbits of the action of the relevant base group. These are "mass shells" – radii of spheres in the Euclidean case, or proper length/time values that pick out hyperboloids in the Lorentzian case. Assigning these to edges has an obvious geometric meaning (as a proper length of the edge), which thus has a continuous spectrum. The areas and volumes interpreting the intertwiners and 2-intertwiners start to exhibit more of the discreteness you see in the usual formulation with representations of the groups themselves. Finally, Aristide pointed out that this model originally arose not from an attempt to make a quantum gravity model, but from looking at Feynman diagrams in flat space (a sort of "quantum flat space" model), which is suggestively interesting, if not really conclusively proving anything.
Finally, Laurent Freidel gave a talk, "Classical Geometry of Spin Network States", which was a way of challenging the idea that these states are exclusively about "quantum geometries", and tried to give an account of how to interpret them as discrete, but classical. That is, the quantization of the classical phase space (the cotangent bundle of connections-mod-gauge) involves first a discretization to a spin-network phase space, and then a quantization to get a Hilbert space, and the hard part is the first step. The point is to see what the classical phase space is, and he describes it as a (symplectic) quotient, built by assigning $T^*(SU(2))$ to each edge and then reducing by gauge transformations. The puzzle is to interpret the states as geometries with some discrete aspect.
The answer is that one thinks of edges as describing (dual) faces, and vertices as describing some polytopes. For each vertex, there's a finite-dimensional "shape space" of convex polytopes with a fixed number of faces and fixed face areas. This has a canonical symplectic structure, where lengths and interior angles at an edge are the canonically conjugate variables. Then the whole phase space describes ways of building geometries by gluing these things (associated to vertices) together at the corresponding faces whenever the two vertices are joined by an edge. Notice this is a bit strange, since there's no particular reason the faces being glued will have the same shape: just the same area. An area-1 pentagon and an area-1 square associated to the same edge could be glued just fine. Then the classical geometry for one of these configurations is built of a bunch of flat polyhedra (i.e. with a flat metric and connection on them). Measuring distance across a face in this geometry is a little strange. Given two points inside adjacent cells, you measure orthogonal distance to the matched faces, and add in the distance between the points you arrive at (orthogonally) – assuming you glued the faces at the centre. This is a rather ugly-seeming geometry, but it's symplectically isomorphic to the phase space of spin network states – so it's these classical geometries that spin-foam QG is a quantization of. Maybe the ugliness should count against this model of quantum gravity – or maybe my aesthetic sense just needs work.
(Laurent also gave another talk, which was originally scheduled as one of the school talks, but ended up being a very interesting exposition of the principle of "Relativity of Localization", which is hard to shoehorn into the themes I've used here, and was anyway interesting enough that I'll devote a separate post to it.)
[Submitted on 3 Nov 2017 (v1), last revised 3 Jan 2018 (this version, v2)]
Title: Error estimates for extrapolations with matrix-product states
Authors: C. Hubig, J. Haegeman, U. Schollwöck
Abstract: We introduce a new error measure for matrix-product states without requiring the relatively costly two-site density matrix renormalization group (2DMRG). This error measure is based on an approximation of the full variance $\langle \psi | ( \hat H - E )^2 |\psi \rangle$. When applied to a series of matrix-product states at different bond dimensions obtained from a single-site density matrix renormalization group (1DMRG) calculation, it allows for the extrapolation of observables towards the zero-error case representing the exact ground state of the system. The calculation of the error measure is split into a sequential part of cost equivalent to two calculations of $\langle \psi | \hat H | \psi \rangle$ and a trivially parallelized part scaling like a single operator application in 2DMRG. The reliability of the new error measure is demonstrated on four examples: the $L=30, S=\frac{1}{2}$ Heisenberg chain, the $L=50$ Hubbard chain, an electronic model with long-range Coulomb-like interactions and the Hubbard model on a cylinder of size $10 \times 4$. Extrapolation in the new error measure is shown to be on par with extrapolation in the 2DMRG truncation error or the full variance $\langle \psi | ( \hat H - E )^2 |\psi \rangle$ at a fraction of the computational effort.
Comments: 10 pages, 11 figures
Subjects: Strongly Correlated Electrons (cond-mat.str-el); Statistical Mechanics (cond-mat.stat-mech)
Cite as: arXiv:1711.01104 [cond-mat.str-el]
(or arXiv:1711.01104v2 [cond-mat.str-el] for this version)
Journal reference: Phys. Rev. B 97, 045125 (2018)
Related DOI: https://doi.org/10.1103/PhysRevB.97.045125
From: Claudius Hubig [view email]
[v1] Fri, 3 Nov 2017 10:58:37 UTC (924 KB)
[v2] Wed, 3 Jan 2018 12:37:45 UTC (811 KB)
Cognitive Template-Clustering Improved LineMod for Efficient Multi-object Pose Estimation
Tielin Zhang ORCID: orcid.org/0000-0002-5111-9891 1,
Yi Zeng1,2,3,4 &
Yuxuan Zhao1
Cognitive Computation volume 12, pages 834–843 (2020)
Various types of theoretical algorithms have been proposed for 6D pose estimation, e.g., the point pair method, template matching method, Hough forest method, and deep learning method. However, they are still far from the performance of our natural biological systems, which can undertake 6D pose estimation of multiple objects efficiently, especially with severe occlusion. With the inspiration of the Müller-Lyer illusion in the biological visual system, in this paper, we propose a cognitive template-clustering improved LineMod (CT-LineMod) model. The model uses a 7D cognitive feature vector to replace standard 3D spatial points in the clustering procedure of Patch-LineMod, in which the cognitive distance of different 3D spatial points will be further influenced by the additional 4D information related to the direction and magnitude of features in the Müller-Lyer illusion. The 7D vector will be dimensionally reduced into the 3D vector by the gradient-descent method, and then further clustered by K-means to aggregately match templates and automatically eliminate superfluous clusters, which makes template matching possible on both holistic and part-based scales. The model has been verified on the standard Doumanoglou dataset and demonstrates state-of-the-art performance, which shows the accuracy and efficiency of the proposed model in cognitive feature distance measurement and template selection for multi-object pose estimation under severe occlusion. The powerful feature representation in the biological visual system also includes characteristics of the Müller-Lyer illusion, which, to some extent, will provide guidance towards a biologically plausible algorithm for efficient 6D pose estimation under severe occlusion.
The evolutionary procedure of the mammalian brain has resolved the problem of 6D pose estimation by integrating different related brain regions, hundreds of specifically designed neuron types, and functional microcircuits. However, a challenge remains in discovering the mysterious black box of the brain and designing the most efficient biologically plausible model for 6D pose estimation.
With significant development of computer science and cognitive robot theory, various types of theoretical algorithms have been proposed [1] in the research area of 6D pose estimation, e.g., the point pair method, template matching method, Hough forest method, and deep learning method. However, these efforts in machine learning and robotics are still a considerable distance from the performance of the natural biological system. They still face fundamental problems such as sensitivity to illumination changes, noise, blur, and occlusion.
The Müller-Lyer illusion is a special kind of functional phenomenon in the procedure of visual information processing of the brain [2], in which the cognitive distance of different 3D spatial points will be further affected by additional features related to direction and magnitude. Figure 1 shows the three basic types of Müller-Lyer illusion, e.g., the distance illusion in Fig. 1a, direction illusion in Fig. 1b, and angle illusion in Fig. 1c, respectively. These illusions in the visual system contribute to feature detection and object identification during 6D pose estimation, especially in cases of severe occlusion.
a–c The Müller-Lyer illusion in the biovision system
In order to estimate 6D pose in the case of multiple objects with occlusion and motion blur, as well as decrease the computation cost as much as possible, we firstly focused on the standard Patch-LineMod method for its accuracy and efficiency. Patch-LineMod is an improved version of the LineMod method, which separates points into different patches by the K-means method and makes possible multi-object 6D pose estimation.
Inspired by the Müller-Lyer illusion, a cognitive template-clustering improved LineMod (CT-LineMod) model was proposed. The model uses a 7D cognitive feature vector to replace standard 3D spatial points in the clustering procedure of Patch-LineMod, in which the cognitive distance of different 3D spatial points is further affected by additional 4D features related to direction and magnitude in the Müller-Lyer illusion. The 7D vector is dimensionally reduced into the 3D vector by gradient-descent method, and then further clustered by K-means method to aggregately match templates and automatically eliminate superfluous clusters, which makes template matching possible on both holistic and part-based scales.
The CT-LineMod model can be considered as an integration of powerful 7D feature representation from Patch-LineMod and efficient cognitive template-clustering based on the Müller-Lyer illusion. The model was verified on the standard Doumanoglou dataset and performed perfectly, demonstrating the accuracy and efficiency of the proposed model on cognitive feature distance measurement and template selection on multiple pose estimation with severe occlusion.
Four main types of models have been proposed in the research area of 6D pose estimation, including the point pair method, template matching method, Hough forest method, and deep learning method.
For point pair–based algorithms, Drost et al. proposed a point pair feature algorithm, which successfully integrates global description, local point pair features, and local-global matching conversion for better recognition performance in the case of noise, clutter, and partial occlusions [3]. However, it performs poorly in detecting objects with similar background clutter and ignores valuable edge information of objects. Hinterstoisser proposed a new sampling and voting point pair scheme [4] to reduce the harmful effects of clutter and sensor noise. However, this method is extremely sensitive to occlusion and cannot recover the complete object location information. A cognitively inspired 6D motion estimation method is proposed based on solving the Perspective-n-Point problem and the Kalman filter [5].
For template matching–based algorithms, Hinterstoisser et al. proposed the LineMod algorithm, which utilizes the gradient information and normal features of the surface of an object for template matching [6, 7]. However, LineMod shows poor performances on real-time template matching. One possible reason for this is it only focuses on the strong edges during feature extraction. Hodan et al. proposed BOP [8] as a novel and standard benchmark to use with current datasets. However, it focuses more on different single objects rather than multiple overlapping objects in a single scene.
For Hough forest–based algorithms, Gall et al. proposed a target detection algorithm [9], which constructs a random forest to extract image blocks, and then makes a template judgment within each decision tree, and votes in the Hough space. Tejani et al. proposed the latent-class Hough forest model, which integrates a new template-based segmentation function into the regression forest [10]. However, this is limited by the manually designed features for different objects in different overlapped scenes.
For deep neural network (DNN)–based algorithms, Kehl et al. proposed a single-shot multi-box detection algorithm, which uses the DNN-based model for depth learning of 2D images and then uses the projection properties to analyze the inferred points and in-plane rotation scores [11]. A similar convolutional neural network (CNN) based on category detectors has also been used [12]. A robust 3D object detection and pose estimation pipeline based on RGB-D images has been constructed, which can detect multiple objects simultaneously while reducing false positives [13]. For heavy clutter scenes and occlusion problems, Bonde et al. proposed a highly robust real-time object recognition framework, which uses an iterative training scheme to classify the position and posture of 3D objects [14]. Xiang et al. proposed a new pose-CNN for 6D pose estimation by introducing a new loss function named ShapeMatch-Loss [15]. Feifei et al. proposed heterogeneous DenseFusion architecture based on PoseCNN, which uses an end-to-end iterative gesture fine-tuning program and performs excellently on both YCB-Video and LineMod datasets [16]. Euclidean distance, scalable nearest neighbor search method, and CNN are integrated as an efficient model to capture both the object identity and 3D pose [17, 18]. Park et al. proposed a novel architecture Pix2Pose based on CNN [19], which predicted the coordinates of each pixel after feature extraction, and then calculated the position and orientation by voting. Although this effort largely improved the robustness of pose estimation, especially under heavy occlusion, the computation cost was relatively expensive considering a comparable accuracy can be achieved with other methods.
Besides 6D pose estimation, many clustering methods have also been proposed for intelligent patch identification, especially in cases involving severe occlusion. Nazari et al. proposed a clustering ensemble measurement framework, which is based on the cluster-level weighting, can assign weights into each cluster with different reliability, and has high robustness, clustering quality, and time complexity [20]. Rashidi et al. proposed a clustering ensemble framework, which is based on the integration of undependability concepts and cluster weighting, and also uses hierarchical agglomerative clustering and bi-partite graph formulation to estimate the cluster dependability and certainty [21]. Some semi-supervised clustering methods have also been proposed for better information representation [22].
Unfortunately, most of the models mentioned above are not as efficient and robust as the human brain. Further inspiration from the biological system is necessary to build a human-comparable algorithm for efficient 6D pose estimation.
Standard LineMod and Patch-LineMod
LineMod is a template-based approach for 6D pose estimation [6]. It can handle untextured objects under massive clutter by taking a short training time. However, this template-based effort will inevitably fail on the identification of "multiple" objects in complex scenes. The number of features in this method cannot be greater than a predesigned parameter, for example, 64, which dramatically limits the higher performance of the algorithm. Besides, the recognition rate of the LineMod algorithm will drop rapidly under occlusion. The training procedure of LineMod is shown in Fig. 2a.
a–e The comparisons between LineMod, Patch-LineMod, and CT-LineMod methods
Unlike LineMod, which estimates the pose mostly from the similarity between the target object and the whole template, Patch-LineMod [8] separates the whole template into smaller templates (patch templates) by K-means clustering, and then computes the similarity between training patches and target patches to identify the 6D pose even when some parts of the object are under occlusion. The training procedure of Patch-LineMod is shown in Fig. 2b. Both the LineMod and Patch-LineMod algorithms contain training and testing phases:
The training phase is shown in Fig. 2a and b: LineMod and Patch-LineMod load the RGB-D images of a specific object from different training directions and preprocess them with Gaussian blur and a Sobel operator. Then both calculate the gradient direction and magnitude above a predefined threshold as the 7D cognitive feature. Finally, template matching is used in the learning procedure for pose identification.
The test phase is shown in Fig. 2d: a predefined sliding window is used for the matching process, moving in both the horizontal and vertical directions. The similarity of the 7D cognitive features between trained objects (or patches obtained with K-means in Patch-LineMod) and target objects is calculated, and the trained object with the highest similarity is selected as the estimated pose.
Cognitive Template-Clustering Improved LineMod
Comparisons between the different processing steps of LineMod, Patch-LineMod, and CT-LineMod are shown in Fig. 2. All three methods contain training and test phases.
The standard LineMod method determines the similarity of these features directly against the target object by template matching. However, this matching cannot detect an object that is affected by occlusion. For example, using the LineMod method in Fig. 2a, only one object (and hence only one pose) is identified in the test phase in Fig. 2d.
For the Patch-LineMod method in Fig. 2b, simple K-means clustering is applied to purely spatial locations (i.e., the X, Y, and Z positions of points), which generates different patches for the different modalities and hence detects more objects in the test phase; for example, two objects are detected in Fig. 2d.
The Müller-Lyer illusion provides a new 3D cognitive feature vector that replaces the original 3D spatial location; this cognitive feature vector is a dimensional reduction of the 7D cognitive feature vector. The distance between two points (i.e., their 3D spatial information) is affected by their neighborhood features (i.e., the direction and magnitude of the gradient features in the additional 4D vector), as shown in Fig. 2e. This illusion changes the feature characteristics of objects; for example, the features near edges or borders contribute more to feature identification. Hence, after the cognitive template-clustering in Fig. 2c, CT-LineMod successfully detects all three objects in Fig. 2d.
The 7D Cognitive Feature Vector
The 7D cognitive feature vector contains the 3D spatial feature vector (with the X, Y, and Z spatial locations), and also another 4D vector for gradient direction, gradient magnitude, surface-normal direction, and surface-normal magnitude, as shown in Fig. 3.
The LineMod-type methods contain both gradient modalities and surface normal modalities. For good template matching algorithms, the templates after selection should be robust to scale change, color variance, and severe occlusion. The 7D cognitive feature vector has the potential of integrating different types of cognitive features. For example, the distances between different features will all contribute to the patch allocation. The intrinsic links between these feature points will further help the next-step multiple 6D pose estimation.
Cognitive Template-Clustering
Cognitive template-clustering improves the patch clustering of Patch-LineMod by incorporating the Müller-Lyer illusion, as shown in Fig. 2e. The templates are aggregated based on similarity matching of all 7D cognitive feature vectors, i.e., locations (three dimensions), magnitudes (two dimensions), and directions (two dimensions).
The overall procedure of CT-LineMod is shown in Fig. 4. After loading the raw images (3D point clouds) in Fig. 4a, the four additional features are calculated for each 3D point in Fig. 4b. Features in a neighborhood area of size s × s are integrated as the templates in Fig. 4c. The patches of templates are then allocated by the K-means clustering method using the spatial 3D vectors updated by the Müller-Lyer illusion, as shown in Fig. 4d. Finally, the templates from input images are matched with templates from training images in Fig. 4e, and the matched templates contain the estimated 6D pose.
a–e The architecture of hierarchical information processing in CT-LineMod
Information in Feature Level
The feature-level information is shown in Fig. 4b and Eq. (1). \({F_{p}^{I}}\) and \({F_{p}^{M}}\) are the features calculated from the input image I and the trained image M, respectively, at the position of point p.
$$ \left\{\begin{array}{ll} {F_{p}^{I}} = \left \{ {p^{I}_{X}},{p^{I}_{Y}},{p^{I}_{Z}},p^{I}_{\text{gd}},p^{I}_{\text{gm}},p^{I}_{\text{sd}},p^{I}_{\text{sm}} \right \} \\ {F_{p}^{M}} = \left \{ {p^{M}_{X}},{p^{M}_{Y}},{p^{M}_{Z}},p^{M}_{\text{gd}},p^{M}_{\text{gm}},p^{M}_{\text{sd}},p^{M}_{\text{sm}} \right \} \end{array}\right. $$
\(p^{I}\) is the raw pixel in the input 3D RGB-D image I, and \(p^{M}\) is the raw pixel in the trained image M. \({p^{I}_{X}}\), \({p^{I}_{Y}}\), and \({p^{I}_{Z}}\) are the spatial locations of point p along the X, Y, and Z axes in image I. \(p^{I}_{\text {gd}}\), \(p^{I}_{\text {gm}}\), \(p^{I}_{\text {sd}}\), and \(p^{I}_{\text {sm}}\) represent the gradient direction, gradient magnitude, surface-normal direction, and surface-normal magnitude features, respectively.
Information in Template Level
The template-level information is shown in Fig. 4c and Eq. (2), in which s is the calculation step for template \(T_{p}\), and p is the center point of template T, which covers an area of s × s.
$$ \left\{\begin{array}{ll} {T_{p}^{I}} = \frac{1}{s^{2}}{\sum}_{p\in P}^{s\times s}{F_{p}^{I}}(\mathrm{gd,gm,sd,sm}) \\ {T_{p}^{M}} = \frac{1}{s^{2}}{\sum}_{p\in P}^{s\times s}{F_{p}^{M}}(\mathrm{gd,gm,sd,sm}) \end{array}\right. $$
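To make the feature- and template-level steps concrete, the following is a minimal NumPy/SciPy sketch of Eqs. (1) and (2). It is not the authors' released implementation: using a grayscale image for the gradient modality and finite differences on the depth map for a crude surface-normal modality are simplifying assumptions, and the function names are illustrative.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def cognitive_features(xyz, gray, depth):
    """Per-pixel 7D feature vectors [X, Y, Z, gd, gm, sd, sm] of Eq. (1).

    xyz   : (H, W, 3) 3D point coordinates of the RGB-D frame
    gray  : (H, W) grayscale image, used here for the gradient modality
    depth : (H, W) depth map, used here for a crude surface-normal modality
    """
    gy, gx = np.gradient(gray.astype(float))
    grad_dir, grad_mag = np.arctan2(gy, gx), np.hypot(gx, gy)

    dy, dx = np.gradient(depth.astype(float))      # finite-difference "normals"
    surf_dir, surf_mag = np.arctan2(dy, dx), np.hypot(dx, dy)

    # dstack promotes the 2D maps to (H, W, 1) and concatenates along depth.
    return np.dstack([xyz, grad_dir, grad_mag, surf_dir, surf_mag])

def template_average(features, s=5):
    """Eq. (2): average the 4D appearance part (gd, gm, sd, sm) over an
    s x s neighborhood around each pixel; the 3D position is left unchanged."""
    out = features.copy()
    for c in range(3, 7):
        out[..., c] = uniform_filter(features[..., c], size=s)
    return out
```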
Müller-Lyer Illusion from 7D to 3D Feature Space
The Müller-Lyer illusion can be considered as a dimension reduction of the feature-level information from 7D to 3D. Equation (3) shows the Müller-Lyer illusion function \(Ill_{\text{ML}}\), whose input is the 7D feature of Eq. (1) and whose output is the updated 3D pixel-level information, mapping \(p^{I}\) to \(\hat{p}^{I}\).
$$ \hat{p}^{I}_{X},\hat{p}^{I}_{Y},\hat{p}^{I}_{Z} = Ill_{\text{ML}}({p^{I}_{X}},{p^{I}_{Y}},{p^{I}_{Z}},p^{I}_{\text{gd}},p^{I}_{\text{gm}},p^{I}_{\text{sd}},p^{I}_{\text{sm}}) $$
For easier understanding, here we use \(x_{i}^{7d}\) to represent \({F_{p}^{I}}\) in the calculation of \(Ill_{\text{ML}}\). As shown in Eq. (4), \(p_{ij}\) is the symmetrized conditional probability between feature i and feature j in the original 7D space. The feature vectors \(x_{i}^{7d}\) and \(x_{j}^{7d}\) are seven-dimensional, and Gaussian distributions centered on \(x_{i}^{7d}\) and \(x_{j}^{7d}\) are calculated. Here, \(\sigma_{i}\) is the variance and n is the number of candidate points.
$$ \left\{\begin{array}{ll} p_{j/i}=\frac{\exp(-||x_{i}^{7d}-x_{j}^{7d}||^{2}/2{\sigma_{i}^{2}} )}{{\sum}_{k\neq i}\exp(-||x_{i}^{7d}-x_{k}^{7d}||^{2}/2{\sigma_{i}^{2}} )} \\ p_{ij}=\frac{p_{j/i}+p_{i/j}}{2n} \end{array}\right. $$
We then map the vector from 7D to 3D to represent the Müller-Lyer illusion, so that the original spatial coordinates \({p^{I}_{X}}\), \({p^{I}_{Y}}\), and \({p^{I}_{Z}}\) are affected by \(p^{I}_{\text {gd}}\), \(p^{I}_{\text {gm}}\), \(p^{I}_{\text {sd}}\), and \(p^{I}_{\text {sm}}\). During the mapping, the similarity between the high-dimensional 7D and low-dimensional 3D data points also needs to be reflected, in the form of the conditional probability \(q_{ij}\) in 3D space.
$$ q_{ij}=\frac{\left( 1+||y_{i}^{3d}-y_{j}^{3d}||^{2} \right )^{-1}}{{\sum}_{k \neq l}\left( 1+||y_{k}^{3d}-y_{l}^{3d}||^{2} \right )^{-1}} $$
We then calculate all of the conditional probabilities \(p_{ij}\) in 7D space and \(q_{ij}\) in 3D space, and compare the two distributions so as to minimize the Kullback-Leibler divergence between them.
$$ C=\text{KL}(p_{ij}||q_{ij})=\sum\limits_{i}\sum\limits_{j} p_{ij}\log\frac{p_{ij}}{q_{ij}} $$
Then, the stochastic gradient descent method is used for the information mapping from 7D to 3D space.
$$ \frac{\partial C}{\partial y_{i}^{3d}}=4\sum\limits_{j} \frac{\left( p_{ij}-q_{ij} \right )\left( y_{i}^{3d}-y_{j}^{3d} \right )}{1+\left \| y_{i}^{3d}-y_{j}^{3d} \right \|^{2}} $$
After iterative learning over \(p_{ij}\) and \(q_{ij}\), we finally obtain \(\hat{p}^{I}_{X}\), \(\hat{p}^{I}_{Y}\), and \(\hat{p}^{I}_{Z}\) from the mapping of \({p^{I}_{X}}\), \({p^{I}_{Y}}\), and \({p^{I}_{Z}}\) under the Müller-Lyer illusion.
The function \(Ill_{\text{ML}}()\) is similar to the traditional linear dimension-reduction algorithm PCA (principal component analysis) and to the t-SNE (t-distributed stochastic neighbor embedding) method [23], a non-linear dimension-reduction algorithm for embedding a high-dimensional data space into a lower-dimensional one. \(Ill_{\text{ML}}()\) can capture complex polynomial relations between features and performs well at keeping dissimilar data points apart in the lower-dimensional space.
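Since the mapping above is essentially a t-SNE-style embedding from 7D to 3D, Eqs. (4)–(7) can be sketched compactly as follows. The fixed Gaussian bandwidth, the learning rate, and the plain gradient descent (no momentum or early exaggeration) are assumptions made for brevity, not choices taken from the paper.

```python
import numpy as np

def illusion_map(x7, n_iter=500, lr=1.0, sigma=1.0, seed=0):
    """Map an (N, 7) array of cognitive features to (N, 3) illusion coordinates
    by minimising the KL divergence of Eq. (6), using the affinities of
    Eqs. (4)-(5) and the gradient of Eq. (7)."""
    rng = np.random.default_rng(seed)
    n = x7.shape[0]

    # Eq. (4): Gaussian affinities p_{j|i} in 7D, symmetrised to p_ij.
    d2 = np.sum((x7[:, None, :] - x7[None, :, :]) ** 2, axis=-1)
    p_cond = np.exp(-d2 / (2.0 * sigma ** 2))
    np.fill_diagonal(p_cond, 0.0)
    p_cond /= p_cond.sum(axis=1, keepdims=True)
    P = np.maximum((p_cond + p_cond.T) / (2.0 * n), 1e-12)

    y = 1e-4 * rng.standard_normal((n, 3))          # initial 3D embedding
    for _ in range(n_iter):
        # Eq. (5): Student-t affinities q_ij in the 3D space.
        num = 1.0 / (1.0 + np.sum((y[:, None, :] - y[None, :, :]) ** 2, axis=-1))
        np.fill_diagonal(num, 0.0)
        Q = np.maximum(num / num.sum(), 1e-12)
        # Eq. (7): gradient of the KL divergence of Eq. (6), in matrix form.
        PQ = (P - Q) * num
        grad = 4.0 * (np.diag(PQ.sum(axis=1)) - PQ) @ y
        y -= lr * grad
    return y     # the illusion-shifted coordinates (hat p_X, hat p_Y, hat p_Z)
```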
Information in Patch Level
The patch-level information is represented as \(P_{i}\). For the standard Patch-LineMod method, patch clustering is based purely on the spatial information \({p^{I}_{X}}\), \({p^{I}_{Y}}\), and \({p^{I}_{Z}}\), as shown in Eq. (8).
$$ \left\{\begin{array}{l} P_{i}=K\text{means}({p^{I}_{X}},{p^{I}_{Y}},{p^{I}_{Z}}), p\in I,M \end{array}\right. $$
With the help of Müller-Lyer illusion, the new cognitive feature points \(\hat {p}^{I}_{X}\), \(\hat {p}^{I}_{Y}\), and \(\hat {p}^{I}_{Z}\) will replace \({p^{I}_{X}}\), \({p^{I}_{Y}}\), and \({p^{I}_{Z}}\), as shown in Eq. (9).
$$ \left\{\begin{array}{ll} P_{i}^{\text{CT}}=K\text{means}(\hat{p}^{I}_{X},\hat{p}^{I}_{Y},\hat{p}^{I}_{Z}) & \text{if} (\hat{p}\in I,M) \\ P_{i}^{\text{CT}}=K\text{means}({p^{I}_{X}},{p^{I}_{Y}},{p^{I}_{Z}}) & \text{if} (\hat{p}\notin I,M) \end{array}\right. $$
The \(P_{i}^{\text {CT}}\) represents the proposed cognitive template-clustering in the procedure of patch generation with K-means methods.
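A minimal sketch of the patch allocation of Eqs. (8) and (9), using scikit-learn's KMeans; the number of patches k is an assumed parameter, not a value taken from the paper.

```python
from sklearn.cluster import KMeans

def cluster_patches(points_3d, k=8, seed=0):
    """Eq. (9): allocate feature points to patches by K-means.
    `points_3d` are the illusion-mapped coordinates (hat p_X, hat p_Y, hat p_Z);
    passing the raw spatial coordinates instead reproduces Eq. (8)."""
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(points_3d)
    return km.labels_, km.cluster_centers_
```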
Similarity Measurement for Pose Estimation
As shown in Eq. (10), pose estimation looks for the template features from image I that are most similar to the template features in the dataset image M, within the patch search area identified by the K-means method. The function Sim() is the cosine similarity, which measures the angle between the two input vectors.
$$ \text{pose} = \arg\max_{P_{i}^{\text{CT}}, i\in I,M}\sum\limits_{p}^{p\in P_{i}^{\text{CT},I}, P_{i}^{\text{CT},M}}\text{Sim}({T_{p}^{I}}, {T_{p}^{M}}) $$
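The scoring of Eq. (10) can be sketched as follows. The data structures and the assumption that input and trained templates have already been put in correspondence by the sliding-window search are illustrative, not taken from the authors' code.

```python
import numpy as np

def cosine_sim(a, b):
    """Sim() of Eq. (10): cosine of the angle between two template vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_pose(input_templates, trained_templates_by_pose):
    """Pick the trained pose whose templates are most similar to the input
    templates within one patch (Eq. (10)). `trained_templates_by_pose` maps a
    candidate pose to a list of template vectors aligned with the inputs."""
    scores = {pose: sum(cosine_sim(t_in, t_tr)
                        for t_in, t_tr in zip(input_templates, templates))
              for pose, templates in trained_templates_by_pose.items()}
    return max(scores, key=scores.get)
```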
ICP Post Processing
We use the iterative closest point (ICP) algorithm, together with a non-maximum suppression over the evaluation scores, to remove duplicate poses and to perform pose calculation, pose correction, and pose verification [24].
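ICP itself is a standard algorithm [24]; the sketch below shows only the non-maximum-suppression step for dropping duplicate pose hypotheses. The duplicate radius and the use of translations alone for the duplicate test are assumptions for illustration.

```python
import numpy as np

def suppress_duplicate_poses(poses, scores, radius=20.0):
    """Greedy non-maximum suppression over pose hypotheses.
    `poses` is an (N, 3) array of estimated translations, `scores` their
    matching scores; `radius` (same units as the translations) is an assumed
    threshold below which two hypotheses count as duplicates."""
    order = np.argsort(scores)[::-1]                 # best score first
    kept = []
    for i in order:
        if all(np.linalg.norm(poses[i] - poses[j]) > radius for j in kept):
            kept.append(i)
    return kept                                       # indices of surviving poses
```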
The Training and Test Procedure of CT-LineMod
The two phases of training and test procedures for CT-LineMod-based 6D pose estimation are shown in Algorithm 1.
Experimental Results
RGB-D Doumanoglou Dataset
The 3D point cloud cup and box datasets are the main parts of the Doumanoglou dataset [25], as shown in Fig. 5. It contains the training set and test set, in which the training set contains 4740 rendered images, and the test set contains 177 images. The range of object distances is from 455 to 1076 mm, the azimuth range is from 0 to 360∘, and the elevation range is from − 58 to 88∘.
The RGB-D images of the cup and box in the Doumanoglou dataset and the feature maps during the training procedure. (a1, a2) The RGB-D raw images for cup and box. (b1, b2) The masks of objects. (c1, c2) 3D spatial positions in Patch-LineMod. (d1, d2) Clustering results in Patch-LineMod. (e1, e2) 3D spatial positions in CT-LineMod. (f1, f2) Clustering results in CT-LineMod
Figure 5a1 shows the raw images of a mini coffee cup taken over 360∘ at different observation angles. The feature maps produced during the training procedure include the RGB-D image, the mask image in Fig. 5b1, and the initial feature maps in Fig. 5c1 and e1. Figure 5d1 shows the clustered feature map based on the Patch-LineMod method, in which K-means is applied to the 3D spatial locations. Figure 5f1 shows the clustering after the Müller-Lyer illusion mapping from the 7D to the 3D feature space. The cognitive template-clustering method clearly generates better feature point clustering, separating the corners, edges, and planes of the cups well. Additionally, it generates more uniformly distributed feature points than Patch-LineMod, which also contributes to the performance of 6D pose estimation under occlusion.
Similarly to the coffee cup, the box dataset is also processed by both Patch-LineMod and CT-LineMod, from Fig. 5a2 to f2. The source code of this paper is forked from the Patch-LineMod project (Footnote 1) and then updated into CT-LineMod (Footnote 2).
6D Pose Estimation
6D pose estimation is the task of detecting 6D poses of cups and boxes, which contain the locations and orientations of different objects. The awareness of position and orientation of objects in a scene is sometimes referred to as six degrees of freedom pose.
6D pose estimation for cups and boxes in Doumanoglou dataset
After feature calculation and patch generation, we use the segmented template to perform sliding-window matching in the horizontal and vertical directions, respectively, and calculate the similarities within these windows. Figure 6 shows the improvement of CT-LineMod over the standard Patch-LineMod in identifying the number of targets and in the accuracy of pose estimation for coffee cups and boxes.
The first line of Fig. 6 shows the raw RGB-D images and detected feature points after the candidate-feature filtering. This image contains 12 cups, and the AprilTag is used for the spatial calibration [26].
The second line of Fig. 6 shows the detected 6D pose of the top-1 target, i.e., the candidate object pose with the highest confidence. For different view points, the detected best pose candidate is different.
The third line of Fig. 6 shows the multiple pose estimation for the images, in which more than one candidate pose is calculated by both the CT-LineMod and the Patch-LineMod methods. For most cases, CT-LineMod obtains more pose candidates; this result is also confirmed by the comparisons in Tables 1 and 2.
The fourth line of Fig. 6 shows raw detected modalities, and the white color represents gradient modalities.
Table 1 The comparison of recall values for three algorithms on three datasets
Table 2 The comparison of F1 scores between our method and other state-of-the-art methods
The Experimental Comparisons
We use recall and F1 values to show the performance of the proposed CT-LineMod on the dataset with severe occlusion, counting detected-correct, detected-incorrect, and undetected results of 6D pose estimation. \(N_{\text{top}}\) denotes the top N pose estimates with the highest confidence, evaluated for two kinds of objects in each scene (i.e., 56 RGB-D pictures for the coffee cup, 60 RGB-D pictures for the juice box, and 61 RGB-D pictures for both of them, for a total of 177 test samples) in the Doumanoglou dataset [25].
As shown in Table 1, we select three kinds of scenes: the coffee cup scene, the juice box scene, and the mixed scene. The three methods, i.e., the standard LineMod, Patch-LineMod, and CT-LineMod methods, are compared under the configuration of maximum \(N_{\text{top}}\). From the table, the proposed CT-LineMod method largely improves the recall performance compared with the other methods. CT-LineMod can identify most of the targets, except the ones hidden at the bottom or under severe occlusion.
In addition, we have tested our method on the full Doumanoglou dataset (with heavy-occlusion scenes) in the "SIXD Challenge", and make a further F1-score comparison with other state-of-the-art methods under the configuration of \(N_{\text{top}} = 1\). The LineMod algorithm [6], the PPF (point pair feature) algorithm [3], the Hough forest algorithm [10], the Doumanoglou algorithm [25], and our proposed model are evaluated. The mean is the average of the performance on the two datasets. The F1 scores of these methods are shown in Table 2, which demonstrates the power of the proposed CT-LineMod model compared with other state-of-the-art methods.
The LineMod method is a template matching method that is more efficient for 6D pose estimation than other methods based on point pair features, Hough forests, and DNNs. However, the standard LineMod method can only detect a single target (i.e., the one with the maximum confidence value) and cannot cope with multiple-object occlusion and motion blur. One of the primary motivations of this paper is to find an efficient way to make multiple 6D pose estimation under occlusion possible.
Hence, inspired by the Müller-Lyer illusion, a cognitive template-clustering improved LineMod model, i.e., CT-LineMod model, is proposed. The model uses a 7D cognitive feature vector to replace standard 3D spatial points in the clustering procedure of Patch-LineMod, in which the cognitive distance of different 3D spatial points is further influenced by the additional 4D information related to direction and magnitude of features in Müller-Lyer illusion. The Müller-Lyer illusion is a kind of sensation illusion in the visual system, which contributes to a better feature generation, feature representation, and also 6D pose estimation under severe occlusion. Finally, the CT-LineMod method has been verified by its performance of 6D pose estimation on the Doumanoglou dataset, in which the model's accuracy and efficiency were demonstrated.
https://github.com/meiqua/patch_linemod
https://github.com/brain-cog/Brain_inspired_Batch
Luo B, Hussain A, Mahmud M, Tang J. Advances in brain-inspired cognitive systems. Cogn Comput 2016;8(5):795–796.
Seel NM, (ed). 2012. Müller-lyer illusion. Boston: Springer.
Drost B, Ulrich M, Navab N, Ilic S. Model globally match locally: Efficient and robust 3d object recognition. 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition. IEEE; 2010. p. 998–1005.
Hinterstoisser S, Lepetit V, Rajkumar N, Konolige K. Going further with point pair features. European Conference on Computer Vision. Springer; 2016. p. 834–848.
Chen J, Luo X, Liu H, Sun F. Cognitively inspired 6d motion estimation of a noncooperative target using monocular rgb-d images. Cogn Comput 2016;8(1):105–113.
Hinterstoisser S, Cagniart C, Ilic S, Sturm P, Navab N, Fua P, Lepetit V. Gradient response maps for real-time detection of textureless objects. IEEE Trans Pattern Anal Mach Intell 2011;34(5):876–888.
Hinterstoisser S, Lepetit V, Ilic S, Holzer S, Bradski G, Konolige K, Navab N. Model based training, detection and pose estimation of texture-less 3d objects in heavily cluttered scenes. Asian Conference on Computer Vision. Berlin: Springer; 2012. p. 548–562.
Hodan T, Michel F, Brachmann E, Kehl W, GlentBuch A, Kraft D, Drost B, Vidal J, Ihrke S, Zabulis X, et al. Bop: Benchmark for 6d object pose estimation. Proceedings of the European Conference on Computer Vision (ECCV); 2018. p. 19–34.
Gall J, Stoll C, De Aguiar E, Theobalt C, Rosenhahn B, Seidel H-P. Motion capture using joint skeleton tracking and surface estimation. 2009 IEEE Conference on Computer Vision and Pattern Recognition. IEEE; 2009. p. 1746–1753.
Tejani A, Tang D, Kouskouridas R, Kim T-K. Latent-class hough forests for 3d object detection and pose estimation. European Conference on Computer Vision. Springer; 2014. p. 462– 477.
Kehl W, Manhardt F, Tombari F, Ilic S, Navab N. Ssd-6d: Making rgb-based 3d detection and 6d pose estimation great again. Proceedings of the IEEE International Conference on Computer Vision; 2017. p. 1521–1529.
Redmon J, Divvala S, Girshick R, Farhadi A. You only look once: Unified, real-time object detection. Proceedings of the IEEE conference on computer vision and pattern recognition; 2016. p. 779–788.
Kehl W, Milletari F, Tombari F, Ilic S, Navab N. Deep learning of local rgb-d patches for 3d object detection and 6d pose estimation. European Conference on Computer Vision. Springer; 2016. p. 205–220.
Bonde U, Badrinarayanan V, Cipolla R. Robust instance recognition in presence of occlusion and clutter. European Conference on Computer Vision. Springer; 2014. p. 520– 535.
Xiang Y, Schmidt T, Narayanan V, Fox D. 2018. Posecnn: A convolutional neural network for 6d object pose estimation in cluttered scenes. Robotics: Science and Systems (RSS).
Wang C, Xu D, Zhu Yuke, Martín-martín R, Lu C, Fei-Fei L, Savarese S. Densefusion: 6d object pose estimation by iterative dense fusion. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2019. p. 3343– 3352.
Wohlhart P, Lepetit V. Learning descriptors for object recognition and 3d pose estimation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2015. p. 3109–3118.
Tompson JJ, Jain A, LeCun Y, Bregler C. Joint training of a convolutional network and a graphical model for human pose estimation. Advances in Neural Information Processing Systems; 2014. p. 1799–1807.
Park K, Patten T, Vincze M. Pix2pose: Pixel-wise coordinate regression of objects for 6d pose estimation. Proceedings of the IEEE International Conference on Computer Vision; 2019. p. 7668–7677.
Nazari A, Dehghan A, Nejatian S, Rezaie V, Parvin H. A comprehensive study of clustering ensemble weighting based on cluster quality and diversity. Pattern Anal Applic 2019;22(1):133–145.
Rashidi F, Nejatian S, Parvin H, Rezaie V. 2019. Diversity based cluster weighting in cluster ensemble: an information theory approach. Artif Intell Rev, pp 1–28.
Qin Y, Ding S, Wang L, Wang Y. 2019. Research progress on semi-supervised clustering. Cognitive Computation, pp 1–14.
van der Maaten L, Hinton G. Visualizing data using t-sne. J Mach Learn Res 2008;9:2579–2605.
Besl PJ, McKay ND. Method for registration of 3-d shapes. Sensor fusion IV: Control Paradigms and Data Structures. International Society for Optics and Photonics; 1992. p. 586–606.
Doumanoglou A, Kouskouridas R, Malassiotis S, Kim T-K. Recovering 6d object pose and predicting next-best-view in the crowd. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2016. p. 3583–3592.
Olson E. Apriltag: A robust and flexible visual fiducial system. 2011 IEEE International Conference on Robotics and Automation. IEEE; 2011. p. 3400–3407.
This study is financially supported by the Beijing Natural Science Foundation (No. 4184103), the National Natural Science Foundation of China (No. 61806195), the Strategic Priority Research Program of Chinese Academy of Sciences (Grant No.XDB32070100), the Beijing Municipality of Science and Technology (Grant No. Z181100001518006), the CETC Joint Fund (Grant No. 6141B08010103), and the Beijing Academy of Artificial Intelligence (BAAI).
Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
Tielin Zhang, Yi Zeng & Yuxuan Zhao
Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
Yi Zeng
National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
School of Software and Microelectronics, Peking University, Beijing, China
Tielin Zhang
Yuxuan Zhao
Tielin Zhang, Yang Yang, and Yi Zeng conceived the idea and formulated the problem. Tielin Zhang and Yang Yang performed the simulations and wrote the paper. Tielin Zhang and Yuxuan Zhao analyzed the results.
Correspondence to Tielin Zhang or Yi Zeng.
The authors declare that they have no conflict of interest.
This article does not contain any studies with human participants or animals performed by any of the authors.
Tielin Zhang and Yang Yang contributed equally to this article and should be considered as co-first authors.
Zhang, T., Yang, Y., Zeng, Y. et al. Cognitive Template-Clustering Improved LineMod for Efficient Multi-object Pose Estimation. Cogn Comput 12, 834–843 (2020). https://doi.org/10.1007/s12559-020-09717-5
Issue Date: July 2020
Müller-Lyer illusion
Brain-inspired computation
LineMod
Write $\left|\begin{array}{cc} a_1+b_1 & c_1+d_1\\ a_2+b_2 & c_2+d_2 \end{array}\right|$ as a sum of four determinants
Express $\left|\begin{array}{cc} a_1+b_1 & c_1+d_1\\ a_2+b_2 & c_2+d_2 \end{array}\right|$ as a sum of four determinants whose entries contain no sums
My primitive thoughts on this:
I was thinking that I could split the determinant, but that is impossible because $\det(A+B)$ is not $\det(A)+\det(B)$
I really had no clue what to do from here
a different attempt was done as I expanded out the determinant to get the following
$a_1c_2+a_1d_2+b_1c_2+b_1d_2-a_2c_1-a_2d_1-b_2c_1-b_2d_1$
I was trying to group the above line so as to create more determinants, but that effort proved to be in vain
and thus I request some help
linear-algebra matrices algebra-precalculus determinant
asked May 7 '17 at 4:15
John Rawls
$\begingroup$ Which definition of determinant do you have? The simplest solution to this problem uses the fact that a determinant is a linear function of each of its rows. $\endgroup$ – Semiclassical May 7 '17 at 4:24
$\begingroup$ @Semiclassical I'm not really familiar with linear algebra, but the definition that I'm using pertains to matrices, not the calculus one $\endgroup$ – John Rawls May 7 '17 at 4:55
Expanding the determinant and regrouping will get you the correct answer, you just need a little more perseverance! Try taking each positive term of your expansion and grouping it with each negative term of your expansion. Those may be the determinants you want.
Whenever you get stuck at some point like that, clever regrouping or looking at the problem at a different angle is necessary. Take a step back and see which parts may go nicely together. Use your intuition to try to guess the answer, and then back it up with math.
If you are in a class where you are looking at things from a more abstract setting, you may want to use the fact that a determinant is a linear function of its rows, and that for any linear function $T$, $$ T(x + y) = T(x) + T(y) .$$
Bob Krueger
Here's a way to handle the algebra in an organized way. First, we expand the determinant: $$\begin{vmatrix}a_1+b_1 & c_1+d_1\\ a_2+b_2 & c_2+d_2 \end{vmatrix}=(a_1+b_1)(c_2+d_2)-(a_2+b_2)(c_1+d_1)$$ When we multiply out the first term, we'll end up with four different products depending on whether we're distributing the first or second part of each factor. If we group these contributions from each of the two factors together, we get
$$(a_1 c_2-a_2 c_1)+(b_1 c_2-b_2 c_1)+(a_1d_2-a_2 d_1)+(b_1 d_2-b_2 d_1)$$ which we can recognize as the desired sum of determinants
$$ \begin{vmatrix} a_1 & c_1 \\ a_2 & c_2\end{vmatrix} +\begin{vmatrix} b_1 & c_1 \\ b_2 & c_2\end{vmatrix} +\begin{vmatrix} a_1 & d_1 \\ a_2 & d_2\end{vmatrix} +\begin{vmatrix} b_1 & d_1 \\ b_2 & d_2\end{vmatrix}.$$
Semiclassical
$\begingroup$ The observant reader will note that this approach would have worked just as well had each matrix element had $n$ terms instead of 2. This reflects the fact that the determinant is a multilinear function of its rows. $\endgroup$ – Semiclassical May 8 '17 at 17:01
$a_1c_2+a_1d_2+b_1c_2+b_1d_2-a_2c_1-a_2d_1-b_2c_1-b_2d_1$ - I think you're on the right track here.
Take, for example, $a_1c_2+a_1d_2$. Look at the definition of the determinant for a $2\times 2$ matrix and see if there's any connection and resemblance between it and $a_1c_2+a_1d_2$. There is one.
UPD: also remember that $a+b$ is $a -(-b)$.
alekscooper
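As a quick numerical sanity check of the decomposition above (not part of any of the answers), one can verify the identity with random entries:

```python
import numpy as np

rng = np.random.default_rng(0)
a1, b1, c1, d1, a2, b2, c2, d2 = rng.standard_normal(8)

lhs = np.linalg.det(np.array([[a1 + b1, c1 + d1],
                              [a2 + b2, c2 + d2]]))
rhs = sum(np.linalg.det(np.array(m)) for m in [
    [[a1, c1], [a2, c2]],
    [[b1, c1], [b2, c2]],
    [[a1, d1], [a2, d2]],
    [[b1, d1], [b2, d2]],
])
print(np.isclose(lhs, rhs))   # True
```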
Proceedings of the 14th Annual MCBIOS Conference
Deep learning architectures for multi-label classification of intelligent health risk prediction
Andrew Maxwell1,
Runzhi Li2,
Bei Yang2,
Heng Weng3,
Aihua Ou3,
Huixiao Hong4,
Zhaoxian Zhou1,
Ping Gong5 &
Chaoyang Zhang1
Multi-label classification of data remains a challenging problem. Because of the complexity of the data, it is sometimes difficult to infer information about classes that are not mutually exclusive. For medical data, patients could have symptoms of multiple different diseases at the same time, and it is important to develop tools that help to identify problems early. Intelligent health risk prediction models built with deep learning architectures offer a powerful tool for physicians to identify patterns in patient data that indicate risks associated with certain types of chronic diseases.
Physical examination records of 110,300 anonymous patients were used to predict diabetes, hypertension, fatty liver, a combination of these three chronic diseases, and the absence of disease (8 classes in total). The dataset was split into training (90%) and testing (10%) sub-datasets. Ten-fold cross validation was used to evaluate prediction accuracy with metrics such as precision, recall, and F-score. Deep Learning (DL) architectures were compared with standard and state-of-the-art multi-label classification methods. Preliminary results suggest that Deep Neural Networks (DNN), a DL architecture, when applied to multi-label classification of chronic diseases, produced accuracy that was comparable to that of common methods such as Support Vector Machines. We have implemented DNNs to handle both problem transformation and algorithm adaptation type multi-label methods and compare both to see which is preferable.
Deep Learning architectures have the potential of inferring more information about the patterns of physical examination data than common classification methods. The advanced techniques of Deep Learning can be used to identify the significance of different features from physical examination data as well as to learn the contributions of each feature that impact a patient's risk for chronic diseases. However, accurate prediction of chronic disease risks remains a challenging problem that warrants further studies.
Chronic diseases are responsible for the majority of healthcare costs worldwide [1, 2]. An early diagnosis from an expert can reduce a patient's healthcare costs and extend the lifespan and quality of life for a patient. Early diagnosis of a chronic disease is often difficult due to the complexity and variability of the factors that lead to the disease. In an effort to help physicians diagnose these types of diseases early, computational models are being utilized to predict whether a patient shows signs of one or more types of chronic diseases. Modern big data analysis allows physicians to infer information from patient data with less computational time and cost, and thus to build powerful tools for the purposes of intelligent health risk prediction.
Recently, deep learning techniques are being used for all different purposes with great success and are becoming more popular within various disciplines. Because of its generality, similar architectures put together through deep learning can be applied to many classification problems. Particularly within the medical field they are increasingly being used as a tool for multi-label classification. For example, Mayr et al. use a Deep Neural Network as a way to identify different sets of chemical compounds for toxicity prediction for humans [3], Lipton et al. use Recurrent Neural Networks to analyze time-series clinical data to classify 128 different diagnoses [4], and Esteva et al. use Convolutional Neural Networks to identify skin-cancer [5].
In this study, hypertension, diabetes, and fatty liver are three chronic diseases that are analyzed to predict types of chronic diseases for a patient. The diagnosis that is given for a certain patient can be one of the three, some combination of the diseases, or can be diagnosed as showing no signs of any of the diseases. This means that overall there are eight different diagnoses that can be given.
The layout of the paper is as follows: Methods will describe the two Deep Learning architectures that were used as a predictor for the multi-label classification dataset, the different types of algorithms that serve as a benchmark for comparison purposes, and explain evaluation methods that show how Deep Learning architectures perform when compared against traditional and other similar multi-label classification type methods; Results will describe the data and report the differences of performance between the methods chosen; Finally, discussion and conclusions are made about the performance of deep learning architectures for the purposes of predicting chronic diseases in physical examination records.
Several different machine learning methods are brought together to compare the performance of Deep Learning architectures on the physical examination data. In this section, combinations of traditional machine learning methods are used, plus a few methods that were specifically developed to solve multi-label classification problems. The other traditional methods can be used to solve multi-label problems, but generally involve some manipulation of the dataset in order for the algorithm to interpret the targets of a dataset correctly. In other words, a multi-label dataset is transformed into a single-label dataset with multiple classes. There are many different techniques that have been used to handle this type of conversion. There are generally two categories of multi-label classification approaches: problem transformation and algorithm adaptation methods. One of the more popular problem transformation techniques is called the Label Powerset (LP) [6], where each unique set of labels in a multi-label dataset is considered a single label. This unique set of labels is considered a powerset. A classifier is trained on these powersets in order to make a prediction. Some of the following methods make use of this particular technique in order to handle multi-label classification. However, there are some drawbacks when manipulating the data to suit this format. It is common for LP datasets to end up with a large number of represented classes and few samples of each class to train on. An advantage that Deep Learning methods have over similar problem transformation techniques is that they can train on the original data without needing to resort to some type of conversion of the data. These Deep Learning methods fall more into the algorithm adaptation category.
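A minimal sketch of the Label Powerset idea follows; the classifier choice and function names are purely illustrative. Each distinct label combination becomes one multiclass target, any multiclass classifier is trained on those targets, and predictions are mapped back to label vectors.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def label_powerset_fit(X, Y):
    """Y is an (n_samples, n_labels) binary matrix; each unique row becomes
    one multiclass target, and a single multiclass classifier is trained."""
    powersets, codes = np.unique(Y, axis=0, return_inverse=True)
    clf = LogisticRegression(max_iter=1000).fit(X, codes)
    return clf, powersets

def label_powerset_predict(clf, powersets, X):
    """Map multiclass predictions back to the original label combinations."""
    return powersets[clf.predict(X)]
```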
Ensemble methods
There are a couple of methods that were used to compare against the Deep Learning techniques that make use of, or have a variation of, the LP transformation. In particular, the Random k-Labelsets (RAkEL) method for multi-label classification [7, 8] is one such method that utilizes LPs to train on groups of smaller, randomly selected sets of labels, which are of size k, using different classifiers on groups of LPs, then uses a majority voting rule as the basis for selecting target values. If the average of the predictions for a label is above a certain threshold, then the label is chosen as true for that instance.
The ELPPJD method [9] is an ensemble multi-label classification method that uses a technique similar to LP and RAkEL where the data is transformed into a multi-class problem, then performs a joint decomposition subset classifier method to handle imbalanced data. This joint decomposition creates subsets of the data based upon the number of samples per LPs.
The following section describes the classification methods that we used for prediction. Besides the Deep Learning methods, most of these classifiers were part of a single-label, multiclass step when used with RAkEL and MLPJTC after the dataset transformation. These classifiers were the "base" classifiers for the previously mentioned multi-label classification methods.
A Multilayer Perceptron (MLP) [10] is a machine learning method that was originally developed in an attempt to simulate how the brain operates. As researchers added more improvements to this method, such as backpropagation [11], it became one of the more common classification tools because of the way that the network could infer information about the data in the absence of a priori information. The architecture of an MLP is usually described as a network of layers where there are at least three layers: an input layer, a hidden layer, and an output layer. Each of these layers is built with multiple nodes that have edges, or weights, connecting them to each successive layer in the network. Each node in the network calculates the synaptic weight of the connections of the previous layer and then passes the result to an activation function, usually some sigmoidal type of function. Eq. 1 shows the calculation of the synaptic weight of a single node at position j over all N edges connected to the node from the previous layer, with some additional bias b, which is generally random Gaussian noise defined as b~N(0, 1). In Eq. 1, X_i is the input coming from node i of the previous layer, which has N nodes, and W_ij is the associated weight for the link connecting node i in the previous layer and the node O_j in the current layer. Eq. 2 represents the activation function of the node, where ϕ is the sigmoid function, but it could easily be any number of other activation functions, such as the hyperbolic tangent function.
$$ {O}_j=\sum \limits_{i=1}^N{X}_i{W}_{ij}+{b}_j $$
$$ \phi =\frac{1}{1+{e}^{-\left({O}_j\right)}} $$
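As a minimal sketch of Eqs. 1 and 2 (written here in NumPy with arbitrary example values, not the implementation used in this study), a single node sums its weighted inputs plus a bias and squashes the result with the sigmoid:

```python
import numpy as np

def node_output(x, w, b):
    """Weighted sum of the inputs plus bias (Eq. 1), squashed by the sigmoid (Eq. 2)."""
    o_j = np.dot(x, w) + b               # O_j = sum_i X_i * W_ij + b_j
    return 1.0 / (1.0 + np.exp(-o_j))    # phi(O_j)

x = np.array([0.2, 0.7, 0.1])            # outputs of the previous layer
w = np.random.normal(size=3)             # weights W_ij
b = np.random.normal()                   # bias b_j ~ N(0, 1)
print(node_output(x, w, b))
```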
The number of nodes in the input layer is typically the number of features or attributes of a dataset, and the connections of the input layer to the hidden layer can differ depending on how many nodes are selected for the hidden layer. The hidden layer can consist of multiple different layers stacked together, but it is generally assumed that the performance of an MLP does not increase past two layers. The hidden layer is connected to the output layer, where the output layer has the same number of nodes as the classes being predicted. The calculation above happens for each node in the network until the output layer is reached. At this point, called a forward pass, the network has tried to learn about the sample passed in and has made a prediction about that data, where the nodes of the output layer are probabilities that the sample is of a certain class. This is the point where backpropagation takes over. Since this is a supervised technique, an error between the prediction y_j and the target t_j of the sample n is calculated as the difference between the two values (Eq. 3) and passed to a loss function (Eq. 4) to determine a gradient, which allows the network to adjust, or back propagate, all of the weights between each node up or down depending upon the gradient of the error (Eq. 5). Eq. 5 shows the equation for a gradient descent method. In general, it is an optimization function min_θ ε(n|θ), where θ is the vector of parameter values. Δw_j(n) represents the change in weight for the node at position j for sample n, α is a parameter called the learning rate, which determines how much to move in the direction of the gradient, y_i is the prediction from the output layer, and \( \frac{d}{dn}\varepsilon \left(n|\theta \right) \) is the gradient of the loss function.
$$ {e}_j={t}_j(n)-{y}_j(n) $$
$$ \varepsilon \left(n|\theta \right)=\frac{1}{N}\sum \limits_j^N{e}_j^2(n) $$
$$ \Delta {w}_j(n)=-\alpha \frac{d}{dn}\varepsilon \left(n|\theta \right)\,{y}_i(n) $$
This process of a forward pass and backpropagation continues until a certain number of iterations are met, or the network converges on an answer. Another way to look at the method is that the architecture is using the data to find a mathematical model or function to best describe the data. As the network is trying to learn, it is constantly searching for a global minimum value such that predictions can be accurate.
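To make the forward-pass/backpropagation loop concrete, the following toy sketch trains a single linear output node with the squared-error loss of Eq. 4 and the gradient-descent update of Eq. 5 (a NumPy illustration only; the data and dimensions are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))                        # 100 samples, 5 features
t = X @ np.array([0.5, -1.0, 2.0, 0.0, 1.0])          # toy targets

w = rng.normal(size=5)                                # initial weights
alpha = 0.01                                          # learning rate

for epoch in range(1000):
    y = X @ w                                         # forward pass (prediction)
    e = t - y                                         # error e_j = t_j - y_j (Eq. 3)
    loss = np.mean(e ** 2)                            # squared-error loss (Eq. 4)
    grad = -2.0 / len(X) * (X.T @ e)                  # gradient of the loss w.r.t. the weights
    w -= alpha * grad                                 # weight update Delta w = -alpha * gradient (Eq. 5)

print(loss)                                           # the loss shrinks as the weights converge
```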
The C4.5 algorithm [12] is a classification method that is used to build a decision tree. It uses the concept of information gain and attributes of the data to split nodes of a tree into one class or another. It decides the best attribute of the data to properly split samples of the data and follows some base cases to add more nodes to the tree.
Support Vector Machines (SVM) work by trying to separate the classes of samples in the data with hyperplanes. They try to maximize the distance between classes as much as possible. A single hyperplane can be used for linear classification, while nonlinear classification is achieved with kernel functions, which map the samples into a space where they become linearly separable.
For this study, there were two different implementations of SVM algorithms that were tested with the physical examination dataset. One implementation used Sequential Minimal Optimization (SMO) [13] while the other is a slight variation of the SMO algorithm that was developed from the library package LibSVM [14, 15].
Random Forest is another decision tree type algorithm that takes advantage of the concept of bagging, or using many different learned models together to make an accurate prediction [16]. It creates a collection of different decision trees based on random subsets of samples per tree and decides which class to predict by employing a voting mechanism to rank the decisions.
ML-KNN is an extension of the k nearest neighbors algorithm for multi-label classification [17]. It works by determining the k nearest neighbors of an instance as it is passed to the algorithm; the information gained from the labels most closely associated with those neighbors is then used to predict the appropriate LP for the unseen instance. BP-MLL is a multi-label neural network algorithm that could be considered for performance comparison, which will be included in our future work. This algorithm was successfully applied to the classification of functional genomics and text categorization [18].
Deep learning architectures
Deep Learning architectures are becoming more popular as a set of tools for machine learning. For multi-label classification, these types of systems are performing very well, even sometimes outperforming humans in certain aspects. Here, Deep Learning methods are used to predict chronic diseases for intelligent health risk prediction. What follows is a brief description of the types of architectures that we implemented when using physical examination records to predict chronic diseases. There are two different implementations of the DNN used for multi-label classification: one for problem transformation, and another for algorithm adaptation.
Deep Neural Networks (DNN) are an extension of the MLP; an MLP is usually considered a DNN if it has multiple hidden layers [19, 20]. In addition to multiple layers, there are different types of activation functions and gradient descent optimizers that help to address an issue that MLPs suffer from: the vanishing gradient problem. The vanishing gradient problem arises whenever a network is trying to learn a model, but the gradients of the error are so small that adjustments to the weights through backpropagation make almost no difference to the learning process, so the network never reaches a global minimum. As mentioned before, there are different activation functions that are typically used for MLPs and DNNs, such as sigmoid or hyperbolic tangent functions. However, specifically for Deep Learning, different activation functions have been proven to achieve better results in certain cases. One of these activation functions is called a Rectified Linear Unit (ReLU). For some activation functions, the evaluation of a node can lie between negative one and positive one. For the ReLU function, however, any evaluation below zero is cut off at zero and positive values pass through unchanged, or more formally f(x) = max(0, x), where x is the result of the equation coming from the node of the network. Gradient descent optimizers are optimization algorithms used for finding a minimum of the loss function. Hyper parameters such as learning rates and momentum serve these gradient descent algorithms by shifting how much to move through the function space in order to converge on a global minimum. If a value is either too low or too high, then the optimizer may miss the global minimum entirely and settle on a local minimum, or perhaps it may never converge at all.
To optimize the hyper parameters of these deep learning networks, we opted for a grid search to find the best solution and let the networks converge on a model that suits the data. A grid search enumerates the different variables that should be accounted for in a deep learning model so that it reaches the global minimum as quickly and accurately as possible. For the multilayer perceptron, there were three different parameters: epochs, learning rate, and hidden layers. In practice, these are the parameters that changed prediction results the most. Epochs are how many passes over the data are used to train the model, the learning rate determines how fast or slow the gradient descent optimizer adjusts to reach the minimum, and hidden layers refer to the number of individual layers between the input and output layers. The DNNs in our example are fully connected networks, meaning that each node contains a connecting edge to all of the nodes in the successive layer in the network. Hidden layer units are the number of nodes that exist in each individual hidden layer in the network. The number of units that were chosen came to be 35. This is based on one of the parameters that WEKA uses for their multi-layer perceptron, where they use the equation a = (attributes + classes)/2 to determine the number of units for a layer.
There are also some different activation functions that were used, either the sigmoid function or ReLU, and dropout layers were also chosen. Dropout was developed for the purposes of helping a network avoid overfitting [21]. The basic idea behind dropout is to block certain nodes from firing in the network and allow other nodes the opportunity to learn through different connections or infer different information by only allowing access to certain information. There are differing opinions on whether or not one should allow dropout between each layer, or only during the last hidden layer and output. In this study both options are investigated to get an overall view of how the network performs.
Determining the cost function for a network can make a large difference in the accuracy of the network so special care should be taken to examine whether or not the right cost function is used. For single label data, a softmax function was used for the output layer. The reason for this is straightforward. The equation for the softmax function is as follows:
$$ \sigma {(n)}_i=\frac{e^{n_i}}{\sum_{k=1}^K{e}^{n_k}}\quad \mathrm{for}\ i=1\dots K $$
where a vector n of K values is normalized against the exponential function. The idea behind the softmax function is to normalize the data such that the values of the output layer in the network lie in the range (0, 1) and the sum total of the values equals 1. These values can then be interpreted as probabilities, where the highest probability is most likely the best candidate label for the sample in the dataset. Of course, this is acceptable for single label data because each label is considered mutually exclusive. For multi-label data another option should be considered. Because we cannot use softmax in this case, we should use some other function that has a range of (0, 1) so that the outputs can be interpreted as probabilities. The sigmoid function is a good fit for this task. Since the predictions in the output layer of the network are independent of the other output nodes, we can set a threshold to determine the classes to which the sample belongs. In our case, the threshold θ for the output layer is 0.5 (Eq. 7). When selecting θ, analyzing the output of the prediction values to find their range will help to guide the selection of the threshold value.
$$ f(x)=\left\{\begin{array}{ll}0, & x<\theta \\ 1, & x\ge \theta \end{array}\right.,\ \mathrm{where}\ \theta =0.5 $$
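A minimal sketch of this thresholding step (Eq. 7), applied to the sigmoid outputs of the final layer; the example logits are invented for illustration:

```python
import numpy as np

def multilabel_predict(logits, theta=0.5):
    """Turn raw output-layer values into independent per-label decisions."""
    probs = 1.0 / (1.0 + np.exp(-logits))   # sigmoid squashes each output into (0, 1)
    return (probs >= theta).astype(int)      # Eq. 7: 1 if prob >= theta, else 0

logits = np.array([2.3, -0.4, 0.1])          # one sample, three disease labels
print(multilabel_predict(logits))            # -> [1 0 1]
```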
Evaluation methods
In order to compare these different methods, accuracy cannot be the single metric used to determine the effectiveness of an algorithm. Multiple other metrics are typically used to get an overall consensus on how a method performs. For example, one method could have a very high accuracy, but the data could be imbalanced and the model biased towards a class that dominates the dataset. In that case, most guesses are labeled correct simply because the model selects the dominating class most of the time, without actually learning any information about the data.
The metrics that are used to compare the different methods are accuracy, precision, recall, and F-score. The accuracy of a method determines how correct the values are predicted. Precision determines the reproducibility of the measurement, or how many of the predictions were correct. Recall shows how many of the correct results were found. F-score uses a combination of precision and recall to calculate a score that can be interpreted as an averaging of both scores. The following equations show how to calculate these values, where TP, TN, FP, and FN are true positive, true negative, false positive, and false negative respectively.
$$ Accuracy=\frac{TP+ TN}{TP+ FP+ TN+ FN} $$
$$ Precision=\frac{TP}{TP+ FP} $$
$$ Recall=\frac{TP}{TP+ FN} $$
$$ F\ Score=\frac{2\times Precision\times Recall}{Precision+ Recall} $$
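These four metrics follow directly from the confusion-matrix counts, as in the short sketch below (the counts are invented for illustration):

```python
def evaluate(tp, tn, fp, fn):
    accuracy  = (tp + tn) / (tp + fp + tn + fn)
    precision = tp / (tp + fp)
    recall    = tp / (tp + fn)
    f_score   = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f_score

# Hypothetical counts for a single label
print(evaluate(tp=80, tn=900, fp=20, fn=30))
```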
Classifier evaluation platform and development environment
The majority of classifiers were used with the software package WEKA, which as mentioned earlier is a common benchmark tool to evaluate the performance between multiple algorithms. There are two different categories of classifiers that were used with WEKA; one that used the GUI interface to run individual algorithms on the data that was transformed via the MLPJTC method, and the other category used the MULAN package that was built upon the WEKA API to handle the multi-label data. For multi-label classification, the RAkEL method from the MULAN package is used, and then the base classifier implemented through the WEKA API is used for classification of the data itself. In other words, the RAkEL method transforms the multi-label data in order for the classifiers to be run. The MLPJTC results are listed in Table 1 and the RAkEL results are listed in Table 2. An additional multi-label method, MLkNN, is also listed in Table 2. MLkNN was implemented in the MULAN package by the authors of RAkEL and is a method that was included for benchmark purposes. The deep learning architectures were implemented in the deep learning package TensorFlow, which is an API written in Python and developed by Google. TensorFlow provides a way to build deep neural networks using basic implementations of the different deep learning architectures, or the axioms of these architectures. TensorFlow also includes tools to evaluate performance and help with deciding how to manipulate parameters to allow the network to learn properly.
Table 1 The results of the classifiers for single-label, multi-class dataset
Table 2 The results of the classifiers for multi-label dataset
Dataset and preprocessing
The physical examination dataset is from a medical center where 110,300 anonymous medical examination records were obtained [9]. In the dataset table, each row represents the physical examination record of a patient and each column refers to a physical examination item or feature, except for the last six columns, which indicate disease types. The dataset covers six common chronic diseases, namely hypertension, diabetes, fatty liver, cholecystitis, heart disease, and obesity, and the prediction in this study focuses on the first three of them. Each of the six diseases corresponds to a class label in the classification. From over 100 examination items, 62 features were selected as significant based on expert knowledge and related literature. These items are 4 basic physical examination items, 26 blood routine items, 12 urine routine items, and 20 items from liver function tests. One may get more details about the dataset from [9] and the website provided at the end of this paper.
In order to get some evaluations on the data, a ten-fold cross validation step is performed on the data, where 90% of the data is used for training and 10% is left for testing. Usually, random sampling is enough to get results from cross validation, however with the physical examination records another approach is needed because not all classes were being represented in the training for the model of the classifier.
From Fig. 1, it is apparent that when the data is transformed into a single-label, multiclass problem there is a vast amount of imbalance in the data. This was somewhat expected considering that we were transforming the dataset using the LP method. As mentioned at the beginning of the paper, it is common to end up in a situation such as this, where some labels have a small representation in the overall dataset. The first two classes alone make up 64.25% of the data. With such an imbalanced dataset, it is not hard to imagine that a classifier could tend to be biased towards the first two classes. A couple of strategies were employed to help the classifiers avoid biased predictions. The first is to stratify the training and testing datasets when randomly sampling for a ten-fold cross validation. Stratifying a dataset in this case means that the sampling is proportional to the original dataset. In other words, the sampling will maintain the percentage of class labels from the original data, but will ensure that each class is represented for training purposes. Another issue presented itself, however, because the less-represented classes did not have enough samples for the model to differentiate between specific instances when training. A way to help with this is to include oversampling of the minority classes so that more information can be gained for them. One such implementation is the Synthetic Minority Over-sampling Technique (SMOTE) [22]. This method under-samples the majority class as well as over-samples the minority classes, and additionally introduces synthetic examples of the minority classes to fill some of the feature space rather than simply oversampling with replacement or making multiple copies of instances. According to the authors of SMOTE, this technique has worked well with handwritten character recognition.
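A hedged sketch of these two steps, stratified ten-fold sampling followed by SMOTE oversampling of the training folds, assuming the scikit-learn and imbalanced-learn packages are available; the feature matrix `X` and label-powerset vector `y` below are random stand-ins for the physical examination data:

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from imblearn.over_sampling import SMOTE     # assumes the imbalanced-learn package

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 62))              # stand-in for the 62 examination features
y = rng.integers(0, 8, size=1000)            # stand-in for the 8 label-powerset classes

skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for train_idx, test_idx in skf.split(X, y):
    # Stratification keeps the class proportions of the full dataset in every fold.
    X_train, y_train = X[train_idx], y[train_idx]
    X_test, y_test = X[test_idx], y[test_idx]
    # Oversample the minority classes with synthetic examples before training.
    X_train, y_train = SMOTE(random_state=0).fit_resample(X_train, y_train)
    # ... train a classifier on (X_train, y_train) and evaluate it on (X_test, y_test)
```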
The distribution of physical examination records for chronic diseases. Here, the chronic diseases are Fatty Liver (FL), Diabetes (D), Hypertension (H), combinations of these diseases (DFL, HFL, HD, HDFL), and the absence of any disease, classified as Normal (N)
Comparison of different classifiers
In Table 1, various popular classification methods are compared against each other to analyze the performance of the single-label, multi-class dataset. LibSVM and SMO are different types of support vector machines, MLP is the WEKA implementation of the Multilayer Perceptron, J48 is the Java implementation of the C4.5 decision tree algorithm, DNN represents the deep learning architecture that was implemented in TensorFlow, and RF is the Random Forest classifier.
The support vector machines were not able to handle the data as well as the decision tree type algorithms, which scored the best overall. MLP and DNN similarly scored lower than the decision tree algorithms. In the case of single label, multi-class, a bagging type algorithm does fairly well on this dataset.
For Table 2, the classifiers from Table 1 are used as a base classifier for the RAkEL method in order to handle multi-label classification. The difference here is in the MLkNN and DNN methods. These two methods could handle the data without first transforming it into a LP. In all cases of RAkEL except for SMO, the results were improved from the previous table. MLkNN performed the worst out of all methods. DNN had the best accuracy, but when considering the other metrics listed in the table, RAkEL with Random Forest as a base classifier was the best performing classifier overall. This makes sense, because not only is RAkEL creating random subsets of the data, but Random Forest is also generating subsets of the samples for its decision trees. This allows for a very large coverage of all the features to be able to strongly identify correlations in the data. These subsets could allow for more precision when making a prediction. The DNN architecture is trying to find correlations from the data as a whole without any type of ranking, voting, or making subsets of the samples, so there is a wider net of interpretation from the dataset. Also, different adjustments of hyper-parameters could help increase precision and recall values. This dataset in particular has a large amount of TN values which dominate the terms in the equation for accuracy. The model itself tended toward a negative prediction. This is one reason why accuracy was so high while other metrics were lower.
Optimization of deep learning parameters in single-label data
A grid search of hyper parameters was used to find the optimal parameters for the physical examination dataset. When using a grid search, one could randomly choose a set of parameters and train using the chosen set, repeating until a certain number of runs is reached, or one could iterate through all possible combinations to get performance metrics for each run. The latter was chosen as the preferred method of evaluation, in addition to the ten-fold cross validation step. The epochs, or iterations, were 775 and 1000; the learning rates were 0.01, 0.05, 0.75, and 0.1. Hidden layers for the single label data were set as either 1 or 2. The Sigmoid and ReLU activation functions were also compared to evaluate how each of them performed.
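An exhaustive grid search over these values can be sketched as nested loops; `train_and_evaluate` below is a hypothetical stand-in for the actual training and cross-validation routine used in the study:

```python
import random
from itertools import product

def train_and_evaluate(epochs, learning_rate, hidden_layers, hidden_units):
    """Stand-in: train the DNN with these hyper parameters under ten-fold
    cross validation and return its mean accuracy."""
    return random.random()                      # placeholder score

epochs_grid = [775, 1000]
learning_rates = [0.01, 0.05, 0.75, 0.1]
hidden_layers_grid = [1, 2]

results = []
for epochs, lr, layers in product(epochs_grid, learning_rates, hidden_layers_grid):
    score = train_and_evaluate(epochs, lr, layers, hidden_units=35)
    results.append((score, epochs, lr, layers))

print("best configuration:", max(results))      # highest-scoring combination
```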
Overall, the Sigmoid function performed better than the ReLU activation function when compared with the same hyper parameters, as shown in Fig. 2. Further analysis was performed to see the effect that adding multiple layers had on the network's ability to learn about the data. As can be seen from Fig. 3, accuracy drops drastically as multiple layers are introduced. The simpler the network is constructed, the better the accuracy becomes. Here, multiple dropout configurations were introduced to compare performance, including no dropout layers, one dropout layer between the last hidden layer in the network and the output layer, and dropout layers between every layer in the network. The results given in Fig. 3 show that when the network is past the fourth hidden layer, the network plateaus in performance. As more layers are introduced to the network, the issue of the vanishing gradient is more apparent and propagates to the other layers in the network more quickly as a consequence. In addition, for a problem such as this, the extra layers added complexity to the model that may not reflect the complexity of the data itself.
Performance comparison of activation functions. The sigmoid and ReLU activation functions are compared against each other in the DNN architecture
A comparison of additional layers added to the MLP. The hyperparameters are: 1000 epochs, 0.1 learning rate, 35 hidden layer units, hidden layers from 1 to 10, and no dropout to one dropout layer to all dropout layers. These parameters were chosen because they gave the best overall performance for MLP with 1 or 2 layers
The structure of the DNN here is very similar to the implementation of the MLP provided by the WEKA software benchmark tool. However, there are some differences which accounts for the variation in the results. In terms of nodes in the network, each represented node in the WEKA version uses the sigmoid activation function including the output layer. For the loss function, the squared-error loss is used with backpropagation for learning. In the case of the TensorFlow implementation, the output layer of the network was made up of linear units that were not squashed by any activation function. For the loss function, a softmax function with cross entropy was used to calculate the error across the network, then it is passed to an optimizer that implements the Adam algorithm [23] for stochastic gradient optimization.
Impact of deep learning parameters for multi-label data
The following results use the DNN architecture without any transformation of the data (algorithm adaptation) in order to obtain results for multi-label classification. The architecture is almost identical to the one used for the single-label, multiclass data; however, the cost function has changed. As previously mentioned, the cost function for this architecture has to be a bit different considering the fact that predictions for the classes are not mutually exclusive, so the sigmoid function with the addition of cross entropy was selected, and a threshold (θ) is applied to the resulting sigmoid outputs to determine whether or not a class is predicted for multi-label classification. It was found that the sigmoid activation function performed better than the ReLU and hyperbolic tangent functions for this case. To verify that the results were consistent, different numbers of units per layer were tested. In Table 3, the DNN for multi-label data has the same hyper parameters as the previous best version, but the numbers of units per layer were tested with 35, 256, and 512 units. Similarly, the single label version also had better overall results from a less complex architecture, but because of the LP of the data, the distribution of classes was so varied and imbalanced that the metrics suffered some loss in the results. Particularly for the multi-label data, the accuracy seems to be better than that of the other multi-label methods that were compared. Accuracy does generally give an overall view of the results of the architecture itself, but more importantly the other metrics such as precision, recall, and f-score truly give a better sense of the performance of the network. In the case of the DNN for multi-label data, the training metrics are fairly high, but the metrics for the testing data are lower than for the training data. This indicates that the testing data has some wide variability that the network cannot grasp.
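The study's networks were built directly in TensorFlow; the following is only a hedged, higher-level Keras-style sketch of a multi-label DNN with sigmoid cross-entropy, the Adam optimizer, and the 0.5 threshold (the random data stands in for the examination records):

```python
import numpy as np
import tensorflow as tf

# Random stand-ins for the 62 examination features and 3 disease labels.
X = np.random.normal(size=(1000, 62)).astype("float32")
y = np.random.randint(0, 2, size=(1000, 3)).astype("float32")

model = tf.keras.Sequential([
    tf.keras.Input(shape=(62,)),
    tf.keras.layers.Dense(35, activation="sigmoid"),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(3)                          # one independent output per label
])
# Sigmoid cross-entropy treats the labels as non-mutually-exclusive.
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
model.fit(X, y, epochs=10, batch_size=512, verbose=0)

probs = tf.sigmoid(model.predict(X[:5]))              # per-label probabilities
preds = (probs.numpy() >= 0.5).astype(int)            # apply the 0.5 threshold (Eq. 7)
print(preds)
```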
Table 3 DNN results for multi-label data with respect to different number of units
The specific architectures that were developed for the physical examination data were DNNs. However, there are a variety of different architectures that could have been chosen. In this case, it seemed that other architectures did not perform as well as DNNs, possibly due to the fact that the data itself is not so complex as to need the level of computation that other architectures like Convolutional Neural Networks or Recurrent Neural Networks would need. In addition to the complexity, the learning method of the data generally would fit a regression type of model to learn against the data, which does not necessarily fit the type of data that is generally associated with the other architectures. In most cases, such a type of classification of this data falls in the category of DNNs.
Figure 4 and Fig. 5 show the area under the precision-recall curve (AUPR) and the area under the receiver operating characteristic curve (AUROC). These two values combined together show the overall performance of a trained classifier, and have been used many times to determine the effectiveness of a model to predict a class [24]. The performance of the classifier is determined from each class independent of the other, and then together as micro and macro averaged scores. A micro-averaged score gives a value that considers the weight of each class label, whereas the macro-average score is an averaging of the individual scores across each label. The equations for micro and macro scores are shown below.
$$ {Precision}_{micro}=\frac{\sum_{i=1}^l{TP}_i}{\sum_{i=1}^l\left({TP}_i+{FP}_i\right)}\kern0.48em {Precision}_{macro}=\frac{\sum_{i=1}^l\frac{TP_i}{TP_i+{FP}_i}}{l} $$
$$ {Recall}_{micro}=\frac{\sum_{i=1}^l{TP}_i}{\sum_{i=1}^l\left({TP}_i+{FN}_i\right)}\kern0.48em {Recall}_{macro}=\frac{\sum_{i=1}^l\frac{TP_i}{TP_i+{FN}_i}}{l} $$
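These averaged scores can be computed from per-label counts as in the sketch below; the counts are hypothetical values for the three disease labels:

```python
import numpy as np

def micro_macro(tp, fp, fn):
    """tp, fp, fn: arrays of per-label counts (length = number of labels l)."""
    tp, fp, fn = map(np.asarray, (tp, fp, fn))
    precision_micro = tp.sum() / (tp.sum() + fp.sum())
    recall_micro    = tp.sum() / (tp.sum() + fn.sum())
    precision_macro = np.mean(tp / (tp + fp))
    recall_macro    = np.mean(tp / (tp + fn))
    return precision_micro, recall_micro, precision_macro, recall_macro

# Hypothetical counts for hypertension, diabetes, and fatty liver
print(micro_macro(tp=[500, 300, 400], fp=[50, 80, 60], fn=[70, 40, 90]))
```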
The Precision Recall (PR) curve for the testing dataset. The testing dataset contained 10% of the data, or 11,030 instances. Class 0 is Hypertension, Class 1 is Diabetes, and Class 2 is Fatty Liver
The Receiver Operator Characteristic (ROC) curve of the testing data. Class 0 is Hypertension, Class 1 is Diabetes, and Class 2 is Fatty Liver
For the multi-label dataset, an increase in accuracy could be explained by the fact that each class has more training samples since the classes are not mutually exclusive. Considering the distribution of each LP in Fig. 1, the imbalanced data is less of an issue and each class is more likely to have some representation when random sampling for the training set. Some adjustment could be made to the threshold value when the prediction of the output layer is calculated, which could also improve the accuracy of the model.
The introduction of batch normalization has also improved the results of the training [25]. Batch normalization is the process in which mini-batches of the training data are used to step through the network instead of processing the entire training dataset as one step of training. The reason is to minimize the impact of the covariate shifts from the features of the input data, effectively normalizing the layers and reducing the need for other architecture regularization techniques such as dropout layers. Another advantage is that batch normalization can reduce the amount of epochs needed to train the network. For example, before batch normalization, our network achieved an accuracy of 89.90%, after 1000 epochs. After batch normalization using a batch size of 512, the accuracy increased to 92.07%, with only 100 epochs, significantly reducing the amount of training time.
Some architectures can be sensitive to the initialization of the weights. Although the purpose of a Neural Network is to be able to adjust weights even from random initial values, setting the initial weights can significantly affect the results of the prediction depending on the architecture. In the described implementation, a truncated normal is used to initialize the weights within two standard deviations from the mean. The standard deviation was selected to be 0.001 with a mean of zero, so the random values fell roughly between −0.002 and 0.002. Previously implemented architectures used a randomized normal distribution with values ranging between zero and one, but selecting a truncated normal so close to zero increased all evaluation measures by a few points. This architecture seemed to learn fairly well no matter the initialization values; evaluation measures varied only a small amount.
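A hedged Keras-style sketch (not the study's original TensorFlow graph) of the truncated-normal initialization and the batch-normalized mini-batch training described in the two paragraphs above:

```python
import tensorflow as tf

# Truncated normal with mean 0 and stddev 0.001, clipped at two standard deviations.
init = tf.keras.initializers.TruncatedNormal(mean=0.0, stddev=0.001)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(62,)),
    tf.keras.layers.Dense(35, activation="sigmoid", kernel_initializer=init),
    tf.keras.layers.BatchNormalization(),        # normalizes activations per mini-batch
    tf.keras.layers.Dense(3, kernel_initializer=init)
])
model.compile(optimizer="adam",
              loss=tf.keras.losses.BinaryCrossentropy(from_logits=True))
# Mini-batches of 512 samples; far fewer epochs are needed than without batch normalization.
# model.fit(X_train, y_train, batch_size=512, epochs=100)
```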
In this study, a multi-label classification method is developed using deep learning architectures for the purpose of predicting chronic diseases, such as hypertension, in patients for physicians. Such architectures are valuable tools as they are able to capture correlations in the data through iterative optimization techniques. The results show that DNNs give the highest accuracy among all six popular classifiers. The F-score of DNNs is slightly lower than (but comparable to) that of the Random Forest and MLP classifiers, and much higher than that of the SVM and MLkNN classifiers. DNNs will play a valuable role in the future of multi-label classification methods because they are able to adapt to the original data and can eventually find a decent optimized function even with rudimentary pieces from which to learn information. Some expert knowledge could vastly improve the rate and ease at which a network could learn the intricate details of a system. In this case, there are some areas of improvement that could be made in terms of the architecture, and a thorough investigation of the way the data is passed through the architecture of the network should be considered. Further modification of this architecture could enhance the performance of the model in order to achieve better results for precision, recall, and f-score values. Deep learning architectures provide a powerful way to model complex correlations of features together to form an optimized function from which physicians can predict chronic diseases. Additional improvements to the model could easily allow for the inclusion of other chronic diseases as newer data is gathered.
CDC. The power of prevention chronic disease... The public health challenge of the 21 st century; 2009. p. 1–18. Available from: http://www.cdc.gov/chronicdisease/pdf/2009-Power-of-Prevention.pdf
Lehnert T, Heider D, Leicht H, Heinrich S, Corrieri S, Luppa M, et al: Review: Health Care Utilization and Costs of Elderly Persons With Multiple Chronic Conditions. Med. Care Res. Rev. [Internet]. SAGE Publications Inc; 2011;68:387–420. Available from: https://doi.org/10.1177/1077558711399580
Mayr A, Klambauer G, Unterthiner T, Hochreiter S. DeepTox: Toxicity prediction using deep learning. Front. Environ. Sci. [Internet]. 2016;3. Available from: http://journal.frontiersin.org/Article/10.3389/fenvs.2015.00080/abstract
Lipton ZC, Kale DC, Elkan C, Wetzel R: Learning to diagnose with LSTM recurrent neural networks. Int. Conf. Learn. Represent. 2016 [Internet]. 2016. p. 1–18. Available from: http://arxiv.org/abs/1511.03677
Esteva A, Kuprel B, Novoa RA, Ko J, Swetter SM, Blau HM: Dermatologist-level classification of skin cancer with deep neural networks. Nature [Internet]. 2017;542:115–118. Macmillan Publishers Limited, part of Springer Nature. All rights reserved.; Available from: https://doi.org/10.1038/nature21056.
Tsoumakas G, Katakis I: Multi-label classification: an overview. Int J Data Warehous Min. [Internet]. 2007;3:1–13. IGI Global; [cited 2017 Apr 10]. Available from: http://services.igi-global.com/resolvedoi/resolve.aspx?doi=10.4018/jdwm.2007070101.
Tsoumakas G, Katakis I, Vlahavas I: Random k-labelsets for multilabel classification. IEEE Trans Knowl Data Eng. [Internet]. 2011;23:1079–1089. [cited 2017 Apr 10];Available from: http://ieeexplore.ieee.org/document/5567103/.
Tsoumakas G, Vlahavas I: Random k-labelsets: an ensemble method for multilabel classification. Mach. Learn. 2007 [cited 2017 Apr 10]. p. 406–17. ECML 2007 [Internet]. Berlin, Heidelberg: Springer Berlin Heidelberg; Available from: http://link.springer.com/10.1007/978-3-540-74958-5_38
Li R, Liu W, Lin Y, Zhao H, Zhang C: An ensemble multilabel classification for disease risk prediction. J. Healthcare Engineering, vol. 2017, Article ID 8051673, 10 pages. https://doi.org/10.1155/2017/8051673.
Rosenblatt F. The perceptron: a probabilistic model for information storage and organization in the brain. Psychol Rev. 1958;65:386–408.
Rumelhart DE, Hinton GE, Williams RJ: Learning internal representations by error propagation. Parallel Distrib. Process. Explor. Microstruct. Cogn. vol. 1 [Internet]. MIT Press; 1986 [cited 2017 Apr 11]. p. 318–62. Available from: http://dl.acm.org/citation.cfm?id=104293
Salzberg SL. C4.5: Programs for machine learning by J. Ross Quinlan. Morgan Kaufmann Publishers, Inc., 1993. Mach Learn. 1994;16:235–40. [Internet]. Kluwer Academic Publishers; [cited 2017 Apr 11]. Available from: http://link.springer.com/10.1007/BF00993309
Keerthi SS, Shevade SK, Bhattacharyya C, Murthy KRK. Improvements to Platt's SMO algorithm for SVM classifier design. Neural Comput. 2001;13:637–49. [Internet]. MIT Press; [cited 2017 Apr 11]. Available from: http://www.mitpressjournals.org/doi/10.1162/089976601300014493
Chang CC, Lin CJ. LIBSVM: a library for support vector machines. ACM Trans Intell Syst Technol. 2011;2:1–27. [Internet]. ACM; [cited 2017 Apr 11]. Available from: http://dl.acm.org/citation.cfm?doid=1961189.1961199
Fan RE, Chen PH, Lin CJ: Working set selection using second order information for training support vector machines. J Mach Learn Res [Internet] 2005;6:1889–1918. Available from: http://dl.acm.org/citation.cfm?id=1194907
Breiman L. Random forests. Mach Learn. 2001;45:5–32. [Internet]. Kluwer Academic Publishers; [cited 2017 Apr 11]. Available from: http://link.springer.com/10.1023/A:1010933404324
Zhang ML, Zhou ZH: ML-KNN: A lazy learning approach to multi-label learning. Pattern Recognit. [Internet]. 2007 [cited 2017 Apr 11];40:2038–48. Available from: http://www.sciencedirect.com/science/article/pii/S0031320307000027
Zhang M, Zhou Z, Member S: Multilabel neural networks with applications to functional genomics and text categorization. IEEE Trans Knowl Data Eng [Internet] 2006;18:1338–1351. Available from: http://ieeexplore.ieee.org/lpdocs/epic03/wrapper.htm?arnumber=1683770
Bengio Y. Learning deep architectures for AI. Found Trends Mach Learn. 2009;2(1):1–127.
Bengio Y, Courville A, Vincent P. Representation learning: a review and new perspectives. arXiv preprint. 2012;1–34. Available from: http://arxiv.org/abs/1206.5538
Srivastava N, Hinton G, Krizhevsky A, Sutskever I, Salakhutdinov R. Dropout: a simple way to prevent neural networks from overfitting. J Mach Learn Res. 2014;15:1929–58.
Chawla NV, Bowyer KW, Hall LO, Kegelmeyer WP. SMOTE: synthetic minority over-sampling technique. J Artif Intell Res. 2002;16:321–57.
Kingma DP, Ba JL. Adam: a method for stochastic optimization. Int Conf Learn Represent. 2015;2015:1–15.
Davis J, Goadrich M. The relationship between precision-recall and ROC curves. Proc. 23rd Int. Conf. Mach. Learn. - ICML '06 [Internet]. 2006;233–40. Available from: http://portal.acm.org/citation.cfm?doid=1143844.1143874
Ioffe S, Szegedy C: Batch normalization: accelerating deep network training by reducing internal covariate shift. Proc. 32nd Int. Conf. Mach. Learn. [Internet]. 2015 [cited 2017 Jun 24];448–56. Available from: http://arxiv.org/abs/1502.03167
We thank the Collaborative Innovation Center on Internet Healthcare and Health Service of Henan Province, Zhengzhou University for providing medical records for analysis in this study.
The work was partially supported by the USA DOD MD5i-USM-1704-001 grant and by the Frontier and Key Technology Innovation Special Grant of Guangdong Province, China (No. 2014B010118005). The publication cost of this article was funded by the DOD grant.
The physical examination dataset used in this study is located at http://pinfish.cs.usm.edu/dnn/. There are two versions of the data available for download: a simple text file and an ARFF file for use with WEKA. Details about the format of the data are located on the webpage. The personally identifiable information from this dataset has been removed to ensure patient anonymity.
About this supplement
This article has been published as part of BMC Bioinformatics Volume 18 Supplement 14, 2017: Proceedings of the 14th Annual MCBIOS conference. The full contents of the supplement are available online at https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-18-supplement-14.
School of Computing, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
Andrew Maxwell
, Zhaoxian Zhou
& Chaoyang Zhang
Cooperative Innovation Center of Internet Healthcare, School of Information & Engineering, Zhengzhou University, Zhengzhou, 450000, China
Runzhi Li
& Bei Yang
Department of Big Medical Data, Health Construction Administration Center, The Second Affiliated Hospital of Guangzhou University of Chinese Medicine, Guangzhou, China
Heng Weng
& Aihua Ou
Division of Bioinformatics and Biostatistics, National Center for Toxicological Research, US Food and Drug Administration (FDA), Jefferson, AR, 72079, USA
Huixiao Hong
Environmental Lab, US Army Engineer Research and Development Center, Vicksburg, MS, 39180, USA
Ping Gong
CZ and PG conceived the project. AM implemented the deep learning architectures and performed the analysis with other classifiers. RL developed the MLPJTC method that was used for comparisons of different classifiers for the single-label and multiclass classifiers. AM and CZ analyzed the results and wrote the paper. HW, AO and ZZ participated in the development of deep learning methods. BY, ZZ and HH provided advice and suggestions for the experiment and proofread the document. All authors have read and approved the final manuscript.
Correspondence to Heng Weng or Chaoyang Zhang.
Maxwell, A., Li, R., Yang, B. et al. Deep learning architectures for multi-label classification of intelligent health risk prediction. BMC Bioinformatics 18, 523 (2017) doi:10.1186/s12859-017-1898-z
Deep neural networks
Intelligent health risk prediction
Multi-label classification
Medical health records
Resource Classification and Knowledge Aggregation of Library and Information Based on Data Mining
Qin Xiao
Library and Information Center, City College of Dongguan University of Technology, Dongguan 523419, China
[email protected]
The traditional knowledge service systems have nonuniform data structures. Some data are structured, while some are semi-structured and even non-structured. Big data technology helps to optimize the integration and retrieval of the massive data on library and information (L&I), making it possible to classify the resources and optimize the configuration of L&I resource platforms according to user demand. Therefore, this paper introduces the new information service model of big data resources and knowledge services to the processing of L&I data. Firstly, the data storage structure and relationship model of the L&I resource platform were established, and used to sample and integrate the keywords of resource retrieval. Next, an L&I resource classification model was constructed based on support vector machine (SVM), and applied to extract and quantify the attributes of the keywords of resource retrieval. After that, a knowledge aggregation model was developed for a complex network of multiple L&I resource platforms. Experimental results demonstrate the effectiveness of the proposed knowledge aggregation model. The research findings provide a reference for the application of data mining in resource classification.
knowledge aggregation, resource classification, library and information (L&I), data mining, support vector machine (SVM)
Since it was conceptualized in 2008, big data has become a hot topic in academia. In the meantime, data mining has been increasingly applied in various industries [1-3]. In particular, the application of data mining in library and information (L&I) attracts much attention from experts and scholars [4-6]. With the help of data mining, researchers have optimized the aggregation and retrieval of massive L&I data, and acquired a better capability to retrieve, identify, and intelligently analyze such data. Hence, data mining brings new opportunities to the informatization and intellectualization of L&I management systems.
Traditionally, L&I resources are classified based on access control and optimal configuration [7-9]. Raflesia et al. [10] extracted and vectorized the attributes of L&I resources, in the light of the text documents about these attributes. Based on support vector machine (SVM) classification algorithm, Antoniy et al. [11] established an automatic classification model for L&I resources, integrated the sequential minimal optimization (SMO) to effectively improve the classification efficiency, and optimized the classification effect through grid search of the optimal algorithm parameters. Using the real-time information of the L&I resource set during the update, Losee [12] constructed a resource classification model, and verified its feasibility and effectiveness through experiments on multi-source L&I resource data. After exploring deep into the unified management of L&I resources, Tella et al. [13] highlighted the importance of resource management to real-time L&I resource classification, and put forward clear standards for resource classification, principles for differentiating between new and old resources, and effective measures to link up the two kinds of resources; in addition, an L&I resource classification system was developed for the unified management of L&I resources, including 4 A-level classes, 12 B-level classes, and 25 C-level classes. Considering the similarity between same-class L&I resources in content, theme, and features, Jerrett et al. [14] constructed a thematic L&I resource classification model based on long short-term memory (LSTM) network, and demonstrated the superiority and feasibility of the model through experiments on the CNKI database for the Belt and Road Initiative (BRI).
The knowledge generated from L&I resources faces several problems: the knowledge points are scattered and fragmented, the quality is uneven, and the contents are complex and redundant. In addition, there is a lack of direct channels between multi-source L&I resource platforms. It is time-consuming to browse and acquire knowledge on multiple platforms [15-17]. Many scholars have explored the ways to aggregate the knowledge in L&I resources, aiming to scientifically organize, mine, and manage the knowledge, and to innovate the knowledge service model [18-21]. For example, Kankonsue et al. [22] defined the connotations of knowledge aggregation of multi-source L&I resources, effectively organized the knowledge contained in L&I resources, and mined the associations between the knowledge. Borrego [23] proposed a knowledge aggregation strategy based on topic-generated multi-source L&I resources: the topic probability model of latent Dirichlet allocation (LDA) was combined with the hybrid neural network BiLSTM-CNN-CRF (bidirectional LSTM-convolutional neural network-conditional random field) to learn and segment the contents, and to generate knowledge topics. Kalenov et al. [24] produced knowledge summaries of multiple L&I resources, using the maximal marginal relevance (MMR) algorithm and the word2vec model. After mining user interests, Ammar et al. [25] provided a knowledge aggregation and accurate recommendation strategy for multi-source L&I resources, and calculated the user similarity between multi-source L&I resource platforms, creating a robust user network.
Big data technology makes it possible to classify the resources and optimize the configuration of L&I resource platforms according to user demand, and unify the nonuniform data structures (structured, semi-structured, or non-structured) of traditional knowledge service systems. With the aid of data mining, this paper introduces the new information service model of big data resources and knowledge services to the processing of L&I data. Firstly, the keywords of resource retrieval were sampled and integrated based on the data storage structure and relationship model of the L&I resource platform. Next, an SVM-based L&I resource classification model was constructed to extract and quantify the attributes of the keywords of resource retrieval. Then, a knowledge aggregation model was developed for a complex network of multiple L&I resource platforms, and proved effective through experiments.
2. Sampling and Integration of L&I Data
Inspired by bibliometric co-citation, this paper samples and optimizes the L&I data, aiming to optimize the resource configuration, and to aggregate and retrieve the knowledge of L&I resources in the context of big data. Figure 1 models the storage structure of the target L&I resource platform.
Figure 1. The storage structure and relationship model of L&I resource platform
Let A={a1,a2,…,aN} be the set of keyword attributes of the retrieval nodes in the L&I resource database, and {(x1,y1),(x2,y2),…,(xN,yN)} be the binary semantic feature function of the keywords at the retrieval nodes. By reconstructing the feature space of L&I resources, the radio-frequency identification (RFID) tag recognition model of the L&I resources can be established as:
$a_{i}^{(l+1)}=(1-\lambda) a_{i}^{(l)}+\frac{\lambda}{y_{Ni}}\left(\varepsilon_{i}-\sum_{j=1}^{i-1} y_{ij} a_{j}^{(l+1)}-\sum_{j=i+1}^{n} y_{ij} a_{j}^{(l)}\right)$ (1)
where, λ is the attribute weight of the keywords at each retrieval node. The attributes of L&I resources were classified according to the set of attribute classes Bi(i=1, 2, …, N). Considering the difference in the keyword catalogs of L&I resource retrieval, the L&I data were sampled by the following model:
$X_{\varepsilon}=\sum_{i=1}^{B} \sigma_{i}\left(\bar{c}_{i}-\bar{c}\right)\left(\bar{c}_{i}-\bar{c}\right)^{T}$ (2)
where, $\overline{c_{i}}$ is the mean of keywords at each retrieval node; σi is the probability distribution of keyword attributes at each retrieval node. The feature analysis of L&I resources can be performed based on the results of formulas (1) and (2).
Let U={A1,A2,…,AN} be the vector distribution set in the storage space F. Then, the features of the semantic concept set for the keyword management at L&I retrieval nodes can be extracted by:
$H\left(\bar{A}_{j}\right)=\frac{f_{j}^{T} X_{\varepsilon} f_{j}}{\eta_{j}}$ (3)
where, ηj and fj are the weight and frequency of concept j that describes keyword attributes, respectively; Xε is the total number of concepts in the keyword text at each retrieval node.
The attributes of the retrieval keywords for L&I resources were classified by the difference in attribute distribution. Let Qi(i=1, 2, …, N) be the set of independent feature samples in the attribute distribution. Then, the RFID tag of the sample set can be calculated by:
$q(t)=\sum_{M=-\infty}^{\infty} \sum_{N=-\infty}^{\infty} y_{m n} h_{m n}(t)+b(t)$ (4)
where, yMN is the distribution sample set of keyword retrieval of L&I resources; hMN(t) is the fuzzy association between keyword attributes of L&I resources; b(t) is the characteristic interference for keyword management of L&I resources.
The storage space was divided U times into u=F/U. Let Q=(q1, q2, …, qU) be the characteristic distribution of key indices of keyword attributes, and [rj, tj] be the association rule points of retrieval keywords. Then, qj belongs to the interval [rj, tj] in a limited dataset.
Based on the above analysis, the RFID tagging technology was introduced to automatically sample the keyword attributes of L&I resources. Then, the keyword attributes were extracted based on semantic similarity:
$K A_{j}=\frac{\sum_{l=1}^{n}\left(\sigma_{l j}\right)^{2}}{q(t)}$ (5)
where, σlj is the weight for the feature extraction of each keyword attribute. Through the above steps, the keyword features of L&I resource retrieval can be sampled automatically.
Let AR3=(Wα3,Wβ3,E3) be the set of association rules between keyword attributes. Then, the set of constraints satisfies the condition that AR3 is greater than AR1, and smaller than AR2. Let W=(ω1,ω2,…,ωN)T be the weight vector under each alternative keyword retrieval scheme, where weight ωi falls within [0, 1].
Considering the equivalence relationship of semantic mapping, the link set of the keyword attribute distribution satisfies P1∈RN×N, P2∈RM×M, and P3∈RM×N. Then, the ontology index set of the keyword attribute integration can be defined as:
$D=\left[D_{C C}, D_{C}, D_{R C}, D_{R}, A R_{3}\right]$ (6)
where, DCC is the set of concepts of keyword attributes; DC is a concept of keyword attribute; DRC is the set of keyword attribute relationships; DR is a keyword attribute relationship.
Let FAN(j)l|l-1 be the fusion attribute of keyword eigenvectors. To integrate keyword attributes and schedule the association rules of L&I resources, a data fusion scheduling model can be established based on the fuzzy c-means (FCM) adaptive learning algorithm:
$FA_{l \mid l-1}^{N(j)}=\frac{1}{\sqrt{\bar{c}}}\left[FA_{1, l \mid l-1}^{N(j)}-\delta_{l \mid l-1}^{N(j)}, \cdots, FA_{M, l \mid l-1}^{N(j)}-\delta_{l \mid l-1}^{N(j)}\right]$ (7)
Once the fusion class set of keyword attributes was ready, the relevance features were analyzed in the keyword attribute database, and a semantic ontology model was constructed to reflect the classification of retrieval keywords.
Drawing on the idea of semantic ontology and language evaluation, the context distribution features of the text of each keyword were established in the rough set model of proximity. Then, a multi-layer attribute feature space was set up in the L&I retrieval catalog information database. The context weight of the text of each keyword was set to W'=((ω1,y'1),…,(ωn,y'n))T, where weight ωi falls within [0, 1]. Then, the semantic ontology feature model of the retrieval nodes can be expressed as:
$(\bar{x}, \bar{y})=\phi_{2}\left(\left(\left(x_{1}, y_{1}\right),\left(\omega_{1}, y_{1}\right)\right), \cdots,\left(\left(x_{N}, y_{N}\right),\left(\omega_{N}, y_{N}\right)\right)\right)=\Delta\left(\frac{\sum_{j=1}^{N}\left(\omega_{j}, y_{j}\right)\left(x_{j}, y_{j}\right)}{\sum_{j=1}^{N}\left(\omega_{j}, y_{j}\right)}\right)$ (8)
where, the sum of all weights equals 1; yi falls within [-0.5, 0.5]. Then, the fuzzy decision matrix of L&I resource keyword retrieval was constructed, transforming the retrieval process into a 2-tuple linguistic decision problem. Figure 2 explains the integration of keywords for L&I resource retrieval.
Figure 2. The data integration model for retrieval keywords of L&I resources
3. SVM-Based L&I Resource Classification Model
Figure 3 explains the workflow of L&I resource classification. The rapid development of information technology (IT) has diversified the types and structure of L&I resources. Therefore, the keywords of L&I data need to be defined uniformly, according to the shared features of L&I resources, and prepared into a standardized description template.
Figure 3. The workflow of L&I resource classification
Figure 4 shows the formal description model of L&I resources. The model mainly consists of a basic information description module, an online information description module, an implicit knowledge and utility description module, and a retrieval state description module.
Figure 4. The formal description model of L&I resources
Based on the integration of keywords, an extensible markup language (XML) file of the L&I resources can be formulated from the information provided by the description modules. The term frequency–inverse document frequency (TF-IDF) model was adopted to quantify the retrieval keywords of L&I resources. The word frequency in the TF-IDF model can be expressed as:
$W F(w, K)=\frac{A F(w, K)}{\max A F(K)}$ (9)
where, WF(w, K), AF(w, K), and max AF(K) are the normalized frequency, absolute frequency, and peak frequency of keyword w in the semantic information set K of the L&I resources, respectively. The anti-document frequency can be expressed as:
$A D F(w)=\log \frac{N R}{N_{i}+1}$ (10)
where, NR is the number of L&I resources; Ni is the number of resources containing the keyword. From formulas (9) and (10), the weight coefficient of each keyword can be quantified by:
$W E I(w, K)=W F(w, K) \times A D F(w)$ (11)
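As an illustration of formulas (9)-(11), the following minimal Python sketch computes the TF-IDF weight of each retrieval keyword. The function and variable names are illustrative assumptions rather than part of the original system, and the logarithm is taken as the natural logarithm since the text does not specify a base.

```python
import math
from collections import Counter

def keyword_weights(resource_tokens, corpus):
    """Compute WEI(w, K) = WF(w, K) * ADF(w) per formulas (9)-(11).

    resource_tokens: list of keywords in one L&I resource (the set K)
    corpus: list of keyword lists, one per resource (NR resources in total)
    """
    af = Counter(resource_tokens)                  # absolute frequency AF(w, K)
    max_af = max(af.values())                      # peak frequency max AF(K)
    nr = len(corpus)                               # number of L&I resources NR
    weights = {}
    for w, freq in af.items():
        wf = freq / max_af                         # normalized frequency, formula (9)
        ni = sum(1 for doc in corpus if w in doc)  # resources containing keyword w
        adf = math.log(nr / (ni + 1))              # anti-document frequency, formula (10)
        weights[w] = wf * adf                      # keyword weight, formula (11)
    return weights
```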
SVM is a generalized linear classifier with strong generalization ability. This classifier can learn the features of input data, while minimizing structural risks. Considering the advantages of SVM in data mining, this paper applies SVM to classify L&I resources.
In the objective function of SVM, the characteristic parameters have the same variance and the same mean (zero). Therefore, any single feature that does not obey standard normal distribution might dominate the objective function, causing errors in the classification results. To solve the problem, the eigenvalues should be normalized by:
$W E I_{N O R}=\frac{W E I-\overline{W E I}}{W E I_{v a r}}$ (12)
where, the numerator is the difference between WEI and its mean; WEIvar is the variance of WEI.
For the SVM, the keyword set of the L&I resources in the feature space can be described as E={(a1,b1),(a2, b2),…,(aN,bN)}, where ai is an r-dimensional vector in Rr, and bi is the class tag (bi=1, or -1). Then, the classification hyperplane of the feature space can be expressed as:
$b(a)=\phi^{T} a+c$ (13)
where, c is a constant; φ is an r-dimensional vector. The distance from a data point in the feature space to the hyperplane can be described by the function interval bi(φTai+c)/‖φ‖. Under the premise of maximizing this interval, the search for the optimal hyperplane can be transformed into the following constrained optimization problem:
$\min _{\phi, c} \frac{1}{2}\|\phi\|^{2} \quad \text { s.t. } \quad b_{i}\left(\phi^{T} a_{i}+c\right)-1 \geq 0$ (14)
In the real world, some samples cannot be classified by linear classifiers. That is, the distance of some points to the hyperplane is smaller than 1. Thus, a nonnegative slack variable γi was introduced such that bi(φTai+c)≥1-γi. Adding a penalty on γi, the objective function can be transformed into:
$\frac{1}{2}\|\phi\|^{2}+P \sum_{i=1}^{N} \gamma_{i}$ (15)
where, P is the penalty coefficient (P>0). The greater the P value, the stricter the penalty on misclassification. Formula (15) aims to maximize the interval, i.e. minimize ‖φ‖, while minimizing the number of misclassified points. Hence, the constrained optimization problem can be rewritten as:
$\min _{\phi, c} \frac{1}{2}\|\phi\|^{2}+P \sum_{i=1}^{N} \gamma_{i} \quad \text { s.t. } \quad b_{i}\left(\phi^{T} a_{i}+c\right) \geq 1-\gamma_{i}, \; \gamma_{i} \geq 0$ (16)
The optimal solution of the original optimization problem can be obtained by solving its dual problem. Then, nonnegative Lagrange multipliers τi, υi≥0 were introduced for the above inequalities. The Lagrangian function can be defined as:
$L(\phi, c, \gamma, \tau, \upsilon)=\frac{1}{2}\|\phi\|^{2}+P \sum_{i=1}^{N} \gamma_{i}-\sum_{i=1}^{N} \tau_{i}\left(b_{i}\left(\phi^{T} a_{i}+c\right)-1+\gamma_{i}\right)-\sum_{i=1}^{N} \upsilon_{i} \gamma_{i}$ (17)
To obtain the optimal solution to the original problem, the feasible solution of the dual problem needs to satisfy the Karush-Kuhn-Tucker (KKT) conditions. That is, the Lagrangian function L is minimized with respect to φ, c, and γ by setting the corresponding partial derivatives to zero:
$\left\{\begin{array}{l}\partial L(\phi, c, \gamma, \tau, \upsilon) / \partial \phi=\phi-\sum_{i=1}^{N} \tau_{i} b_{i} a_{i}=0 \\ \partial L(\phi, c, \gamma, \tau, \upsilon) / \partial c=-\sum_{i=1}^{N} \tau_{i} b_{i}=0 \\ \partial L(\phi, c, \gamma, \tau, \upsilon) / \partial \gamma_{i}=P-\tau_{i}-\upsilon_{i}=0\end{array}\right.$ (18)
The results of formula (18) were simplified and substituted into formula (17). Then, the dual problem can be obtained by maximizing over τ:
$\max _{\tau} \sum_{i=1}^{N} \tau_{i}-\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \tau_{i} \tau_{j} b_{i} b_{j}\left(a_{i} \cdot a_{j}\right) \quad \text { s.t. } \quad \sum_{i=1}^{N} \tau_{i} b_{i}=0, \quad 0 \leq \tau_{i} \leq P$ (19)
The classification of multi-source L&I resources is a hard nonlinear problem. Here, the nonlinear problem is converted into a linear problem with the Gaussian kernel function:
$G\left(a, a^{\prime}\right)=e^{-\frac{|| a-a^{\prime}||^{2}}{2 \sigma^{2}}}$ (20)
To reduce the heterogeneity of multi-source L&I resources, this paper uses the one-against-one method to set up a binary classifier between every pair of L&I resource classes, thereby building up a multi-class SVM, which outputs the classification decision function for the input keyword set of L&I resources E={(a1,b1),(a2,b2),…,(aN, bN)}. The penalty coefficient P and bandwidth σ were selected rationally, and introduced into formula (20). Then, the optimal solution τ*=(τ*1,τ*2,…,τ*N) to the optimization problem can be obtained by:
$\max _{\tau} \sum_{i=1}^{N} \tau_{i}-\frac{1}{2} \sum_{i=1}^{N} \sum_{j=1}^{N} \tau_{i} \tau_{j} b_{i} b_{j} e^{-\frac{\left\|a_{i}-a_{j}\right\|^{2}}{2 \sigma^{2}}} \quad \text { s.t. } \quad \sum_{i=1}^{N} \tau_{i} b_{i}=0, \quad 0 \leq \tau_{i} \leq P$ (21)
Taking a positive component τ*j of the optimal solution τ* that is smaller than P, the following formula can be calculated:
$c^{*}=b_{j}-\sum_{i=1}^{N} \tau_{i}^{*} b_{i} e^{-\frac{\left\|a_{i}-a_{j}\right\|^{2}}{2 \sigma^{2}}}$ (22)
The classification decision function can be established as:
$C D F(x)=\operatorname{sign}\left(\sum_{i=1}^{N} \tau_{i}^{*} b_{i} e^{-\frac{\left\|a_{i}-x\right\|^{2}}{2 \sigma^{2}}}+c^{*}\right)$ (23)
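In practice, the classifier described by formulas (12)-(23) can be approximated with an off-the-shelf SVM implementation. The sketch below, assuming scikit-learn is available, standardizes the keyword-weight features (scikit-learn divides by the standard deviation rather than the variance term in formula (12)), uses an RBF kernel with gamma = 1/(2σ²) as in formula (20), and applies the one-against-one scheme for the multi-class decision; X, y, sigma and P are placeholder names and values, not data from the paper.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: N x r matrix of TF-IDF keyword weights (formula 11); y: class tags b_i.
X = np.random.rand(200, 30)             # placeholder feature matrix
y = np.random.randint(0, 4, size=200)   # placeholder class labels

# StandardScaler plays the role of the normalization in formula (12);
# SVC with an RBF kernel solves the dual problem of formula (21),
# with gamma = 1 / (2 * sigma^2) and the penalty coefficient P as C.
sigma, P = 1.0, 10.0
model = make_pipeline(
    StandardScaler(),
    SVC(kernel="rbf", gamma=1.0 / (2 * sigma**2), C=P,
        decision_function_shape="ovo"),  # one-against-one multi-class scheme
)
model.fit(X, y)
predicted = model.predict(X[:5])
```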
4. Knowledge Aggregation Model
Figure 5 shows the proposed knowledge aggregation model of multi-source L&I resources. The network formed by multiple L&I resource platforms has a complex structure, numerous nodes, and diverse connections. Drawing on previous research on complex networks, this paper proposes a knowledge aggregation model that suits the multi-polar knowledge interaction between platforms.
Figure 5. The knowledge aggregation model of multi-source L&I resources
The evolutionary construction of the platforms is detailed below:
Step 1. Initialize the complex network.
The initial platform only has one platform node node1. Let F1=(μ11,μ12,…,μ1N) be N random fluctuation factors that induce the knowledge interactions across the platform.
Step 2. Form a single-polar local network of multiple platforms.
Triggered by the largest fluctuation factor, the platform node node1 starts to publish the basic information of platform resources and retrieval information. Then, more and more platform nodes emerge, and connect with the existing nodes via the optimal path.
(1) At the beginning of this stage, n1 platform nodes receive the information published by node1, and thus participate in knowledge interaction between platforms. That is, the n1 platform nodes connect with node1. The nodes with weak strength are less likely to connect the other nodes via the optimal path, or to participate in further knowledge interaction. The n1 platform nodes and the d1 additional paths between them form the knowledge interaction network 1.
(2) Suppose n2 platform nodes receive the information, and thus participate in knowledge interaction between platforms. The new nodes choose to connect network 1 via the optimal path. The nodes with weak strength are less likely to connect node i in the network. The n2 platform nodes and the d2 additional paths between them expand the size of network 1.
(3) After t periods, the single-polar local network Net1 of multiple platforms is formed based on the information released by node1. The number of nodes and the number of paths in Net1 can be respectively calculated by:
$N_{n e t 1}=\sum_{j=1}^{t} n_{j}+1$ (24)
$N_{\text {path} 1}=\sum_{j=1}^{t}\left(n_{j}+d_{j}\right)$ (25)
The sum of weights of the paths between two platform nodes is negatively correlated with the path length. The shortest path D1ij between nodes i and j in Net1 can be computed by:
$D_{1 i j}=\frac{1}{\max \left\{\Delta d_{i s}+\ldots+\Delta d_{r i}\right\}}$ (26)
The longer the shortest path, the less frequent the knowledge interaction between two nodes. The mean path length of Net1 can be calculated by:
$L\left(N e t_{1}\right)=\frac{1}{N_{\text {path1 }}\left(N_{\text {path } 1}-1\right)} \sum_{i \neq j \in N e t_{1}} D_{1 i j}$ (27)
In Net1, the concentration of knowledge interaction between node i and another node can be expressed as:
$C O N_{1 i}=\frac{N_{\text {nei1 } i}}{N_{\text {path1 }}\left(N_{ path1}-1\right)}$ (28)
where, Nnei1i is the number of paths between the two nodes. The concentration of knowledge interaction across Net1 can be expressed as:
$\operatorname{CON}\left(\mathrm{Net}_{1}\right)=\frac{1}{N_{\text {net } 1}} \sum_{i \in \text { Net }_{1}} \mathrm{CON}_{1 i}$ (29)
The higher the CON(Net1) value, the more the knowledge interactions across the network.
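A rough Python sketch of the local-network statistics in formulas (27)-(29) is given below, using the networkx library. It assumes a connected, unweighted graph, approximates the shortest path Dij by the hop count, and uses the node degree as a proxy for Nnei1i, since the text leaves both quantities loosely specified.

```python
import itertools
import networkx as nx

def network_metrics(G):
    """Mean path length and knowledge-interaction concentration of a local network.

    Follows formulas (27)-(29); the normalization N_path1*(N_path1 - 1) is taken
    from the paper, with N_path1 read as the number of paths (edges) in the network.
    """
    n_path = G.number_of_edges()
    n_net = G.number_of_nodes()

    # Mean path length, formula (27): sum of shortest paths over ordered node pairs.
    total = sum(nx.shortest_path_length(G, u, v)
                for u, v in itertools.permutations(G.nodes, 2))
    mean_path_length = total / (n_path * (n_path - 1))

    # Concentration of knowledge interaction, formulas (28)-(29),
    # with the node degree standing in for N_nei1i.
    con_i = {i: G.degree(i) / (n_path * (n_path - 1)) for i in G.nodes}
    con_net = sum(con_i.values()) / n_net
    return mean_path_length, con_net
```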
Step 3. Create the multi-polar knowledge interaction network of multiple platforms.
(1) Multiple local networks can be obtained by repeating Step 2. Suppose k local networks are formed Net1, Net2, …Netk. Let Nnet1, Nnet2, …, Nnetk be the number of nodes in the k local networks, respectively; (βj1,βj2,…,βjNnetj) be the node strength of each platform in local Netj, where j=1, 2, …, k. Since the multiple local networks are connected via node connections, the Nnetj platform nodes in local network Netj can connect other local networks as fluctuation factors. The probability for node u to connect other local networks as fluctuation factor can be computed by:
$P_{n e t}=\frac{\beta_{j u}}{\beta_{j 1}+\beta_{j 2}+\ldots+\beta_{j N_{n e t j}}}$ (30)
(2) Multi-polar knowledge interaction takes place between multiple local networks. Through node connections, the multiple local networks form a global network Net. Let N*path be the number of new paths produced through the knowledge interaction between multiple local networks. Then, the mean path length of the global network can be expressed as:
$L(N e t)=\frac{1}{N_{\text {path}}\left(N_{\text {path}}-1\right)} \sum_{i \neq j \in \text {Net}} D_{i j}$ (31)
The number of paths in the global network can be expressed as:
$N_{p a t h}=\sum_{j=1}^{t} N_{p a t h j}+N_{p a t h}^{*}$ (32)
The concentration of knowledge interaction across the global network can be expressed as:
$\operatorname{CON}(\mathrm{Net})=\frac{1}{N_{\text {net}}} \sum_{i \in \text { Net }} C O N_{i}$ (33)
The global network Net encompasses multiple local networks with varied features. Every network, including the global network and each local network, revolves around the platform node with the highest concentration of knowledge interaction to carry out multi-polar knowledge interaction. During the interaction, the interactive relationship between two platforms strengthens with the frequency of their common keywords.
5. Experiments and Result Analysis
The following experiments were conducted to test the performance of our method in the collection and aggregation of L&I data. The algorithms were programmed on MATLAB in C++. The RFID tag conversion accuracy was assumed to be 36 bits. The training set and test set include 3,000 and 500 keyword attributes of L&I resources, respectively. The correlation coefficient between different types of keyword attributes was set to 0.25. The context matching degree was set to 0.61.
Figure 6 compares the recalls of the retrieval keywords of L&I resources aggregated by different methods, including our method, time series reconstruction and adaptive balanced retrieval (TSC-ATR), and adaptive screening of distributed structure (ASDS). It can be seen that our method achieved higher accuracy in keyword retrieval of L&I resources than the other two methods. The superior retrieval accuracy comes from the aggregation of high-precision collected data by the RFID tag recognition model based on keyword attributes.
Figure 6. The comparison of the recall
To verify its feasibility and effectiveness, the SVM-based L&I resource classification model was programmed with Spyder compiler. The resource dataset was randomly divided into a training set and a test set. The classification effect of the proposed model was compared with that of mainstream algorithms, namely, the k-nearest neighbors (k-NN) rough set algorithm, the k-modes k-NN, the kernel SVM, and the multi-class SVM (Figure 7; Table 1). The common metrics like accuracy, recall, F1-score, and receiver operating characteristic (ROC) curve were selected to measure the classification effect. It can be seen that, facing the multi-class, high-dimensional L&I resources, our classification model achieved an accuracy of 84.21%, a recall of 85.61%, an F1-score of 86.71%, and a test value of 83.92%. These results are much better than those of other algorithms.
Table 1. The classification effects of different methods
Figure 7. The classification results of different methods
Figure 8 presents the learning curves of the proposed SVM-based classification model in the training test and cross-validation test. It can be seen that the training test value was relatively stable, while the cross-validation test value tended to be stable with the growing number of samples, indicating that the learning effect of the model gradually improves. This proves the overall good classification effect of our model.
Figure 8. The learning curves of our model
Figure 9. The ROCs of our model
To disclose the classification effect of our model on each type of L&I resources, the ROCs of our model correctly classifying 100%, 99%, 85%, and 80% of test samples, and the mean ROC are plotted as Figure 9. It can be seen that the mean classification accuracy of our model surpassed 92%, suggesting that our model boasts a good classification effect.
Figure 10. The relationship between knowledge aggregation performance and the number of platform nodes
The final task is to verify the effectiveness of the proposed knowledge aggregation model in the complex network of multiple L&I resource platforms. The node strength and mean path length between nodes in global and local networks were tested under the multi-polar knowledge interaction model. Figure 10(a) provides the curve between the number of local network nodes and node strength, and Figure 10(b) displays the curve between the mean path length of the global network and node strength. It can be seen that the node strength of our model obeys the power-law distribution, reflecting the features of node strength distribution of complex networks. This means our knowledge aggregation model has certain credibility. In addition, it was learned that the mean path length slowly increased and then grew linearly, with the growing number of nodes, indicating the knowledge aggregation model adapts to the small-world features of ultrashort mean path length.
This paper introduces the new information service model of big data resources and knowledge services to the processing of L&I data, and constructs a classification model and a knowledge aggregation model for L&I resources based on data mining. Firstly, the resource retrieval keywords were sampled and aggregated, in the light of the data storage structure and relationship model of L&I resource platform. Through experiments, the recalls of the retrieval keywords aggregated by different methods were compared, which verifies the superiority of our method in the collection and aggregation of L&I data. Next, an SVM-based classification model was constructed for L&I resources, and used to extract and quantify the keyword attributes for resource retrieval. Compared with several mainstream methods, the proposed classification model achieved excellent results on the classification of multi-class, high-dimensional L&I resources. Finally, a knowledge aggregation model was constructed for the complex network of multiple L&I resource platforms, and proved to have high credibility and small-world features.
Roadside air pollution in a tropical city: physiological and biochemical response from trees
Ufere N. Uka ORCID: orcid.org/0000-0002-1920-82371,2,
Ebenezer J. D. Belford2 &
Jonathan N. Hogarh2
The economic growth and social interaction of many developing countries have been enhanced by vehicular transportation. However, this has come with considerable environmental cost. Emissions of gases such as carbon monoxide (CO), sulphur dioxide (SO2), nitrogen oxides (NOx) and volatile organic compounds (VOCs), among others, are associated with vehicular transportation. These pollutants can lead to respiratory infections in humans, as well as growth inhibition and death of animals and plants. An investigation was conducted to evaluate the impact of vehicular air pollutants on selected roadside tree species in the Kumasi Metropolis, Ghana. Ficus platyphylla, Mangifera indica, Polyalthia longifolia and Terminalia catappa, which were abundant and well distributed along the roadsides, were selected for the study. Three arterial roads in the Kumasi Metropolis, namely Accra Road (Arterial I), Offinso Road (Arterial II) and Mampong Road (Arterial III), were considered as experimental sites with different traffic volumes. The KNUST campus was selected as the control site. Diurnal concentrations of CO, NO2, SO2 and VOC were monitored at the sampling sites. Three replicates of each tree species were defined at a distance of 10 m from the edge of the road. Physiologically active leaves (20 to 25) from each tree species replicate were harvested for the determination of physiological and biochemical parameters.
The ambient air quality data showed higher levels at the arterial road sites, which were severely polluted based on air quality index. The biochemical studies revealed reductions in leaf total chlorophyll and leaf extract pH whilst leaf ascorbic acid and relative water contents increased at the arterial road sites.
It was found that the plants' tolerance to vehicular air pollution was in the order T. catappa > F. platyphylla > M. indica > P. longifolia. Based on the anticipated performance index, it was revealed that M. indica, F. platyphylla and T. catappa might be performing some level of air-cleaning function along the arterial roads, whilst P. longifolia performed poorly and was unsuitable as a pollution sink.
Vehicular emissions in developed countries have been largely controlled by improvements in vehicle components and fuel quality. The same cannot be said of developing countries, where many old and poorly maintained vehicles ply the roads and poor-grade fuel is used. Transportation, which involves the burning of diesel and gasoline in automobiles, is regarded as a major source of air pollution at both regional and global levels. Motor vehicles discharge a large amount of exhaust emissions such as carbon monoxide (CO), sulphur dioxide (SO2), nitrogen oxides (NOx), volatile organic compounds (VOCs) and particulate matter, which represent 60–70% of the air pollution found in urban areas (Dwivedi and Tripathi 2007). The principal pollutants emitted from gasoline-fuelled vehicles are CO, hydrocarbons (HC) and NOx, while particulate matter, NOx, SO2 and polyaromatic hydrocarbons (PAH) are emitted by diesel-fuelled vehicles (Bhandarkar 2013).
There is considerable evidence that plants, particularly trees, can function as sinks for gaseous pollutants. Pollution removal by plants occurs either through deposition on plant surfaces and/or stomatal uptake. According to Nowak and Crane (2000), short-term air quality enhancement in urban territories with tree cover was 14% for sulphur dioxide, 8% for nitrogen dioxide, 0.05% for carbon monoxide and 15% for ozone. Air pollutants, on entering the plant through the stomata, undergo complex interactions within the cells, leading to a series of reactions, some of which enhance the capacity of the plant to adapt to the stress (Mittler 2002); the plant consequently shows diverse morphological, biochemical, anatomical and physiological responses.
These biochemical and physiological reactions help plant species develop tolerance against air pollution. The plant response to air pollutants is commonly quantified using the air pollution tolerance index (Singh and Rao 1983). This index is based on leaf ascorbic acid, relative water content, leaf pH and total chlorophyll. The change in these parameters reveals the sensitivity or tolerance of plants to air pollution. The identification and grouping of plants into sensitive and tolerant categories is important because the former can serve as indicators and the latter as sinks for the reduction of air pollution (Singh et al. 1991).
Trees are particularly exposed to these emissions because of their stationary nature. The activity of the major physiological processes in the leaf makes it the plant part most susceptible to air pollutants. Research on the Air Pollution Tolerance Index (APTI) and Anticipated Performance Index (API) has been carried out in India (Gupta et al. 2011; Pathak et al. 2011). However, studies on the response of trees in terms of APTI and API in other parts of the world remain limited, with examples from Iran (Esfahani et al. 2013) and Nigeria (Ogunkunle et al. 2015).
In Ghana, traffic intensity is high in many metropolitan areas. Unfortunately, vehicular air pollution levels in these cities are mostly not monitored. Such information, nevertheless, is necessary for controlling air pollution and for providing baseline data on air pollution in the various metropolises of the country. Furthermore, knowledge of the plants that are able to tolerate vehicular air pollution and act as sinks for the toxic gases would be instrumental in controlling air pollution along major roads, especially those with heavy traffic and increased vehicular emissions. The findings from this study will contribute towards developing effective measures for controlling air pollution in fast-growing tropical metropolises, as well as provide vital information on tolerant plant species.
Kumasi is the second largest city in Ghana. It is situated around 270 km north of the national capital, Accra, 397 km south of Tamale (Northern Regional capital) and 120 km south-east of Sunyani (Brong Ahafo Regional capital). Kumasi lies at latitude 6.6666° N and longitude 1.6163° W and has a land area of 254 km2. The minimum temperature in the area is around 21.5 °C and the maximum temperature is 33.7 °C. There are seven major arterial roads leading into and out of the Metropolis, out of which three were selected (Fig. 1). The Kumasi-Accra (Arterial I), Kumasi-Offinso (Arterial II) and Kumasi-Mampong (Arterial III) roads were selected for sampling because they experience extreme congestion, using average vehicle speed as a parameter (Anin et al. 2013). These arterial roads, representing extreme, heavy and severe traffic congestion respectively, were considered as experimental sites, while the Kwame Nkrumah University of Science and Technology campus, with normal traffic flow, was selected as the control site.
Map of the Kumasi Metropolis with road networks
Air quality analysis
The diurnal concentrations of CO, NO2, SO2 and VOC were monitored at the sampling sites using Aeroqual Series 500 (S500) gas monitors (Aeroqual Limited, Auckland, New Zealand). The ambient air quality at each site was monitored for 6 days in 1 week. The quality ratings of CO, SO2, NO2 and VOCs at the study sites were calculated using the equation adopted by Chattopadhyay et al. (2010):
$$ Q = \frac{100 \times V}{V_{\mathrm{s}}} $$
where Q is the quality rating, V the observed value and Vs the permissible threshold value. The quality ratings of the four air pollutants (CO, SO2, NO2 and VOC) were used to calculate the air quality index (AQI) by determining their geometric mean, g = antilog {(log QCO + log QSO2 + log QNO2 + log QVOC)/4}, which was taken as the AQI.
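A small Python sketch of the quality rating and AQI computation is shown below; the threshold values Vs are placeholder numbers used for illustration only and do not reflect the standards applied in the study.

```python
import math

# Permissible threshold values Vs for each pollutant (illustrative placeholders).
THRESHOLDS = {"CO": 10.0, "SO2": 0.05, "NO2": 0.04, "VOC": 0.6}

def quality_rating(observed, pollutant):
    """Q = 100 * V / Vs, as in the Chattopadhyay et al. (2010) formulation."""
    return 100.0 * observed / THRESHOLDS[pollutant]

def air_quality_index(observed_values):
    """AQI taken as the geometric mean of the four quality ratings."""
    ratings = [quality_rating(v, p) for p, v in observed_values.items()]
    return math.exp(sum(math.log(q) for q in ratings) / len(ratings))

aqi = air_quality_index({"CO": 2.4, "SO2": 0.03, "NO2": 0.02, "VOC": 0.4})
```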
Tree species and collection of samples
Four tree species—Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia—were commonly identified along key arterial roads in the Kumasi Metropolis namely Accra Road (Arterial I), Offinso Road (Arterial II) and Mampong Road (Arterial III), as well as the Kwame Nkrumah University of Science and Technology campus road, which served as the control site. Leaf samples were collected fortnightly from these trees in the months of August–November, 2015 before the onset of the harmattan season in December when trees shed their leaves. Three replicates of each tree species with diameter at breast height (DBH) greater than 10 cm and height between 5 and 10 m were sampled at a distance of 10 m away from the edge of the road. The distances between each tree species replicate ranged from 2 to 4.8 km along each road.
Twenty to 25 physiologically active leaves, third from the tip of the apical bud, were harvested from the side of the tree facing the road between 07:00 and 09:00 h for morpho-physiological and biochemical properties determination. Samples for biochemical analysis were stored at − 40 °C until used, while samples for morpho-physiological characteristics were processed immediately.
Determination of physiological and biochemical parameters
The total relative leaf water content was determined according to the method described by Liu and Ding (2008). The leaf-extract pH was estimated according to the method described by Singh and Rao (1983), while the technique of Keller and Schwager (1977) was adopted for determining ascorbic acid.
Chlorophyll and carotenoid were analysed using standard spectrophotometric procedures (Arnon 1949; Wellburn 1994; Joshi and Swami 2009). Leaf samples (3 g) were weighed and homogenised in 10 ml of 80% acetone solution for 15 min. The homogenate was centrifuged at 2500 rpm for 3 min. Pigment absorbance values in the supernatant were measured against a blank using a CECIL 8000 UV-visible spectrophotometer at wavelengths of 645 nm, 663 nm and 480 nm. The chlorophyll and carotenoid contents were determined as follows:
Chlorophyll a (mg/g) = [12.7 (A663) − 2.69 (A645)] × V/(1000 × W)
Chlorophyll b (mg/g) = [22.9 (A645) − 4.68 (A663)] × V/(1000 × W)
Total chlorophyll (mg/g) = [20.2 (A645) + 8.02 (A663)] × V/(1000 × W)
Carotenoids (mg/g) = [A480 + 11.4 (A663) − 6.38 (A645)] × V/(1000 × W)
where A = absorbance of the extract at the indicated wavelength, V = total volume of the chlorophyll solution (ml) and W = weight of the tissue extract (g).
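The pigment calculations can be condensed into a short Python helper, shown below as a sketch; the coefficients are copied from the formulas above (after Arnon 1949 and Wellburn 1994), and the function name and arguments are illustrative.

```python
def pigment_contents(a663, a645, a480, volume_ml, weight_g):
    """Chlorophyll and carotenoid contents (mg/g) from absorbance readings.

    Coefficients follow the formulas quoted in the text; volume_ml is the extract
    volume V (ml) and weight_g the tissue weight W (g).
    """
    scale = volume_ml / (1000.0 * weight_g)
    chl_a = (12.7 * a663 - 2.69 * a645) * scale
    chl_b = (22.9 * a645 - 4.68 * a663) * scale
    total_chl = (20.2 * a645 + 8.02 * a663) * scale
    carotenoid = (a480 + 11.4 * a663 - 6.38 * a645) * scale
    return chl_a, chl_b, total_chl, carotenoid
```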
Evaluation of tolerance and sensitivity of tree species to vehicular emissions
Leaf extract pH, relative water content (RWC), total chlorophyll content and ascorbic acid were used to determine the tolerance and sensitivity of the tree species to vehicular emissions (Thawale et al. 2011). These values were combined into a single numerical expression, the Air Pollution Tolerance Index (APTI), as proposed by Singh and Rao (1983).
APTI is given as: APTI = [AA (T + P) + R]/10; where AA = ascorbic acid, P = pH, T = total chlorophyll content, R = relative water content.
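For clarity, the APTI computation can be expressed as the following minimal Python function; the example values passed to it are illustrative and are not measurements from the study.

```python
def apti(ascorbic_acid, total_chlorophyll, leaf_ph, relative_water_content):
    """Air Pollution Tolerance Index (Singh and Rao 1983):
    APTI = [AA * (T + P) + R] / 10
    """
    return (ascorbic_acid * (total_chlorophyll + leaf_ph)
            + relative_water_content) / 10.0

# Example with illustrative values (mg/g, mg/g, pH units, %):
value = apti(ascorbic_acid=15.2, total_chlorophyll=0.9,
             leaf_ph=5.6, relative_water_content=82.0)
```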
APTI values of tree species obtained were grouped into distinct tolerance levels according to two categorisation methods. The first categorisation approach followed the works of Thakar and Mishra (2010), by comparing the APTI value of each tree species with the mean APTI value of all the studied tree species alongside with its standard deviation (SD), thus the following classification:
Tree species APTI higher than mean APTI + SD = Tolerant
Tree species APTI value between mean APTI and mean APTI + SD = Moderately tolerant
Tree species APTI value between mean APTI-SD and mean APTI = Intermediate
Tree species APTI lower than the mean APTI = Sensitive
In the second approach, APTI values of tree species obtained were categorised according to Padmavathi et al. (2013) classification:
APTI value above 17 = Tolerant
APTI between 12 and 16 = Intermediate
Less than 12 = Sensitive
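A sketch of the two categorisation approaches is given below in Python; where the textual boundaries overlap or leave a gap (the mean-relative grades of Thakar and Mishra, and the 16-17 interval of Padmavathi et al.), the code adopts one plausible reading, as noted in the comments.

```python
import statistics

def categorise_thakar(apti_values):
    """Relative grading (Thakar and Mishra 2010) against the sample mean and SD.
    One reading of the overlapping class boundaries in the text is used here."""
    mean = statistics.mean(apti_values)
    sd = statistics.stdev(apti_values)
    grades = []
    for v in apti_values:
        if v > mean + sd:
            grades.append("Tolerant")
        elif v > mean:
            grades.append("Moderately tolerant")
        elif v > mean - sd:
            grades.append("Intermediate")
        else:
            grades.append("Sensitive")
    return grades

def categorise_padmavathi(apti_value):
    """Absolute grading (Padmavathi et al. 2013); values between 16 and 17,
    not covered explicitly in the text, are treated as intermediate."""
    if apti_value > 17:
        return "Tolerant"
    if apti_value >= 12:
        return "Intermediate"
    return "Sensitive"
```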
Anticipated Performance Index of studied tree species
Based on the APTI value and some important biological and socio-economic characters, the Anticipated Performance Index (API) was determined for each tree species. A grading point was assigned to each tree species based on the method of Tiwari et al. (1993) with modification (Table 1). On the basis of the grading system, a tree can obtain a maximum of 16 points, which is expressed as a percentage. The percentage obtained is then used to determine the API score category of the plant species for its use in urban greenery (Table 2).
Table 1 Tree species gradation based on air pollution tolerance index, morphological parameters and socio-economic importance
Table 2 Anticipated Performance Index (API) of Tree species
One-way analysis of variance (ANOVA) was conducted to test for differences in plant morphological and biochemical features among the different roads; each time ANOVA revealed a significant difference (p < 0.05), a multiple comparison of the means using the Tukey HSD test was performed. Regression analysis was carried out between the independent variables, namely chlorophyll, pH, RWC and ascorbic acid, and the dependent variable APTI.
Air quality analysis in the sampled sites
The minimum, maximum and mean daylight concentrations of CO, SO2, NO2 and VOC measured at the major arterials roads and the control site in the Kumasi Metropolis are presented in Table 3. The concentrations of the various ambient air pollutants were greater at the arterial roads compared to quite minimal levels recorded at the control site; the difference in mean values among the arterial roads and control sites was statistically significant for CO and SO2 (p < 0.05), but not significant for NO2 and VOC (Table 3).
Table 3 Ambient air quality of Kumasi Metropolis during the study period
Effect of vehicular air pollution on relative water content
The relative water content of leaf samples of all four tree species at the arterial road sites was higher than, but not significantly different from, that at the control site (p = 0.41) (Table 4). Relative water content at the arterial road sites ranged between 68.38 and 93.86%, whilst values at the control site ranged from 64.42 to 79.94%. Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia all had higher relative water content at the arterial road sites than at the control site.
Table 4 Effect of vehicular air pollution on relative water content (%) of four street tree species in the Kumasi Metropolis
Leaf extract pH
The leaf extract pH of leaf samples of all four tree species at the arterial road sites was lower than, and significantly different from, that at the control site (p = 0.000). Leaf extract pH at the arterial road sites was more acidic, ranging from 5.08 to 5.9, whilst values at the control site were slightly acidic, ranging from 6.15 to 6.75 (Table 5).
Table 5 The effect of vehicular air pollution on leaf extracts pH of selected tree species in the Kumasi Metropolis
The ascorbic acid content of all four tree species at the arterial road sites was higher than, and significantly different from, that of the control site (p = 0.000). The mean concentration of ascorbic acid ranged from 12.09 mg/g in Polyalthia longifolia at Arterial road II to 19.81 mg/g in Terminalia catappa at Arterial road I, while at the control site, ascorbic acid content ranged from 10.91 mg/g in Polyalthia longifolia to 14.38 mg/g in Mangifera indica (Table 6).
Table 6 The effect of vehicular air pollution on ascorbic acid contents (mg/g) of selected tree species in the Kumasi Metropolis
Total chlorophyll
The total chlorophyll content of all four tree species at the arterial road sites was lower than, and significantly different from, that of the control site (p = 0.000). The mean concentration of total chlorophyll ranged from 0.53 mg/g in Terminalia catappa at Arterial road I to 1.13 mg/g in Mangifera indica at Arterial road III, while at the control site, total chlorophyll content ranged from 1.21 mg/g in Terminalia catappa to 1.53 mg/g in Mangifera indica (Table 7).
Table 7 The effect of vehicular pollution on total chlorophyll (mg/g) contents of selected tree species in the Kumasi Metropolis
Carotenoid
The carotenoid content of leaf samples of all four tree species at the arterial road sites was lower than, and significantly different from, that at the control site, except for Polyalthia longifolia (p < 0.05). The mean concentration of carotenoid ranged from 0.11 mg/g in Terminalia catappa collected from Arterial road III to 0.17 mg/g in Polyalthia longifolia collected from Arterial road I. At the control site, carotenoid content ranged from 0.17 mg/g in Mangifera indica to 0.19 mg/g in Terminalia catappa and Ficus platyphylla (Table 8).
Table 8 The effect of vehicular air pollution on of carotenoids (mg/g) contents of selected tree species in the Kumasi Metropolis
Relationship between the ambient air quality and biochemical properties of the selected tree species
The relationship between ambient air quality and biochemical properties of Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia was investigated using the multiple regression analysis. Multiple regression analysis resulted in a significant relationship between the determined air pollutants (CO, SO2, NO2 and VOC) as independent or predictive variables and the biochemical parameters (relative water content, leaf extract pH, ascorbic acid, total chlorophyll and carotenoid) as dependent variables. The result is presented using the Pareto chart of t values for the regression coefficients (Fig. 2). SO2 pollution was significant as a predictive variable for total chlorophyll and pH; total chlorophyll and ascorbic acid in Mangifera indica, Ficus platyphylla and Polyalthia longifolia respectively (Fig. 2b–d). CO pollution related significantly with ascorbic acid content in M. indica (Fig. 2b), whilst NO2 pollution was significant as a predictive variable for pH and total chlorophyll content in Ficus platyphylla (Fig. 2c).
a Pareto chart of t values between ambient air quality and biochemical parameters of Terminalia catappa. b Pareto chart of t values between ambient air quality and biochemical parameters of Mangifera indica. c Pareto chart of t values between ambient air quality and biochemical parameters of Ficus platyphylla. d Pareto chart of t values between ambient air quality and biochemical parameters of Polyalthia longifolia
Air Pollution Tolerance Index for selected tree species
In calculating Air Pollution Tolerance Index, the ascorbic acid, total chlorophyll, pH of leaf extract and relative water content were used in the assessment of level of tolerance to vehicular pollution (Table 9). The mean Air Pollution Tolerance Index (APTI) value of all the four tree species from the four study sites ranged from 15.69 to 20.52 with an overall mean APTI value of 18.37 and standard deviation of 1.33. The total mean APTI value of each tree species are as follows: Terminalia catappa (19.76); Ficus platyphylla (19.16); Mangifera indica (18.78) and Polyalthia longifolia (17.60).
Table 9 Air pollution tolerance index (APTI) and classification for selected tree species in the Kumasi Metropolis
The Pearson correlation values presented in Table 10 show the association of the four biochemical parameters with the dependent parameter APTI.
Table 10 Correlation matrix between the APTI values and some studied parameters
Assessment of Anticipated Performance Index of the selected tree species
The gradation of the four studied tree species based on air pollution tolerance, morphological parameters and socio-economic importance is presented in Table 11. The tree species that fitted into the grading model with regard to their Anticipated Performance Index (API) were proposed for green belt improvement. Using the API score classes given in Table 2, the scores of the studied tree species revealed that Mangifera indica and Ficus platyphylla were very good performers, Terminalia catappa was a good performer, while Polyalthia longifolia was identified as a poor performer.
Table 11 Evaluation of tree species gradation based on APTI, morphological parameters and socio-economic importance
Air quality in selected sampling sites
Higher concentrations of CO, SO2, NO2 and VOC were recorded at the arterial road sites in comparison to the control. It has been reported by Saxena et al. (2012) that in urban areas, traffic flow is among the foremost emission sources. Thus, the three arterial roads could be expected to have higher vehicular air pollutant levels than the control site.
In this study, Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia had higher relative water content at the experimental sites than at the control site. A similar result was obtained by Jyothi and Jaya (2010). The higher relative water content at the arterial road sites might thus be responsible for the normal functioning of biological processes in these trees. Under stress conditions, a high relative water content within a plant's organs helps maintain its physiological equilibrium.
Leaf extract pH signals the occurrence of detoxification processes in plants that are necessary for tolerance (Thawale et al. 2011). The leaf extract pH of Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia in this study was found to be acidic at the arterial road sites. A similar observation was reported in Gladiolus gandavensis (Swami et al. 2004). Low pH values are an indication of the sensitivity of a plant species to air pollutants, while high pH could provide tolerance to pollutants (Govindaraju et al. 2012; Saxena et al. 2012). Plants exposed to air pollutants (specifically SO2) generate substantial H+ to react with the SO2 that enters through the stomata, forming H2SO4 and lowering leaf pH (Zhen 2000). It has been reported that higher leaf extract pH values lead to higher plant absorption of SO2 and NOx (Zou 2007). In this study, lower pH values were recorded at the arterial roads, where SO2 values were higher; this characteristic suggests that leaf extract pH could be used as an indicator of vehicular air pollution.
Ascorbic acid is an antioxidant commonly found in growing plant parts, and its level reflects a plant's resistance to air pollution (Pathak et al. 2011). In this investigation, the ascorbic acid content in the leaves of Terminalia catappa, Mangifera indica, Ficus platyphylla and Polyalthia longifolia was higher at the arterial road sites than at the control site. This is in agreement with the reports of Nwadinigwe (2014) and Rai et al. (2013). However, Rai and Panda (2014) reported higher ascorbic acid at the control site and reduced ascorbic acid at the experimental sites. The increase in ascorbic acid levels observed in these tree species suggests their tolerance to the pollutants, especially automobile exhausts, and a defence mechanism of the respective plants. Previous studies have shown that ascorbic acid reduces the reactive oxygen species (ROS) concentration in leaves; thus, a higher ascorbic acid content in a plant is a sign of its tolerance against SO2 pollution (Jyothi and Jaya 2010; Varshney and Varshney 1984).
The observation in this study that photosynthetic pigments in the tree leaves were lowered with higher concentrations of vehicular air pollutants on the arterial roads lends credence to Tripathi and Gautam's (2007) assertion that the chloroplast is the first site of attack by vehicular air pollutants, which consist of SPM, SO2 and NOx. This is in agreement with earlier studies (Wei et al. 2014). Air pollutants gain entry into the tissues through the stomata and partially denature the chlorophyll, thus decreasing pigment content in the cells of polluted leaves (Pant and Tripathi 2012).
The reduction in chlorophyll has been attributed to the disruption of the chloroplast membrane owing to the phytotoxic nature of SO2 (Winner et al. 1985), bringing about leaching of pigment (Rath et al. 1994). The promotion of secondary processes which break down chlorophyll and kill the cells is believed to be associated with SO2. Acidic pollutants like SO2 bring about phaeophytin formation through the acidification of chlorophyll, thereby reducing leaf chlorophyll (Jyothi and Jaya 2010). Similar reductions in photosynthetic pigments were observed in other studies (Mandal and Mukherji 2000; Wagh et al. 2006; Joshi and Swami 2009; Chauhan 2010). A study on the destruction of the cyto-architectonics of Cucurbita moschata under SO2 and NO2 stress showed damaged chloroplasts and mesophyll cells caused by air pollution (Ding and Lei 1987).
In this study, the carotenoid content of the tree leaves sampled at the arterial road sites was lower in comparison to the control site. Chauhan (2010) reported that carotenoids are sensitive to SO2. Since SO2 is a by-product of vehicular air pollution, it is suggested that this pollutant could have caused the reduction in the carotenoid content of the leaves of the studied species at the road sites. Several researchers have reported that carotenoid content is reduced under air pollution (Sharma and Tripathi 2009; Tripathi and Gautam 2007; Verma and Singh 2006). The decrease in the carotenoid content of the tree leaves at the arterial road sites agrees with Joshi and Swami (2009) that vehicle-induced air pollution reduces photosynthetic pigments in tree leaves exposed to roadside pollution.
It was observed that the independent variable volatile organic compounds (VOCs) did not relate significantly to the dependent biochemical variables in any of the tree species, which suggests that VOC may not be used to predict changes in the biochemical variables. It was not surprising that not all four biochemical variables (pH, relative water content, ascorbic acid and total chlorophyll) used in the computation of the Air Pollution Tolerance Index (APTI) were significantly related to a single air pollutant. This reveals that the APTI gives a general idea of the pollution tolerance of a plant rather than indicating its tolerance to a specific pollutant, and as such does not differentiate between various air pollutants.
In the present study, all four tree species had mean APTI values of more than 17 and are therefore considered tolerant to vehicular air pollution. Plants with higher APTI values are more tolerant to air pollution than those with low APTI values; those with low APTI values are sensitive plants and may act as bio-indicators of air pollution (Shannigrahi et al. 2004; Chandawat et al. 2011). Hence, on the basis of their indices, different plants may be classified into tolerant, moderately tolerant, intermediate and sensitive groups (Chandawat et al. 2011).
Thakar and Mishra's (2010) approach is effective in identifying comparatively tolerant species by comparing the tolerance grades of plant species growing under the same environment, irrespective of how tolerant the investigated species is, whilst Padmavathi et al.'s (2013) approach is useful for selecting truly tolerant plant species using absolute APTI thresholds regardless of the environmental conditions (Zhang et al. 2016). Combining the tolerance results of the tree species from the two approaches gives a better tolerance evaluation. Tree species classified as tolerant based on the Padmavathi approach could be used for urban greenery; if classified as sensitive or intermediate based on the Padmavathi approach, a reassessment against the Thakar and Mishra method should be carried out. For instance, Polyalthia longifolia was classified as intermediately tolerant at Arterial road II and the control site using Padmavathi's approach; on re-categorisation, this tree species was sensitive to vehicular air pollution. Hence, its consideration for use in urban greening should be the least priority. However, the other tree species at the arterial road sites were moderately tolerant and/or tolerant based on the two approaches, and thus they are highly recommended for urban greening. The consideration of these tree species for urban greenery stems from the fact that they are tolerant or moderately tolerant at two or three of the studied sites. Zhang et al. (2016) opined that plant species with tolerant and moderately tolerant grades may be applied in green belt planning for urban and suburban areas. It was also observed that tree species had different tolerance grades at different study sites under the two classifications. This could be a result of differences in air pollution and other environmental factors that may have influenced the four parameters in the APTI formula.
The association of the four biochemical parameters among themselves and with the dependent parameter APTI, as illustrated in this study, suggests that total chlorophyll and ascorbic acid are the determinants on which the tolerance of the tree species depends at Arterial road I. It also suggests that relative water content and total chlorophyll are the determinants of tolerance of the studied tree species at Arterial road II, whilst relative water content and ascorbic acid are the determinants of tolerance at Arterial road III. At the control site, total chlorophyll and ascorbic acid were the most significant determining factors on which the tolerance of the tree species depends.
Environmentalists have consistently advocated for greenery in urban areas and along roadsides. Green belts naturally cleanse the atmosphere through the absorption and diffusion of gaseous and particulate pollutants by their leaves, which function as efficient pollutant-trapping devices (Thambavani and Prathipa 2012). The Anticipated Performance Index is an evaluation framework in which a tree species is graded in view of its air pollution tolerance index, morphological characteristics and socio-economic parameters. In this framework, a tree receives a maximum score of 16 points, which is scaled to a percentage, and the category is determined from the score obtained. From this study, Mangifera indica and Ficus platyphylla were placed under the very good category and are highly recommended for planting as urban trees for auto-exhaust mitigation. These tree species possess a dense canopy of evergreen leaves as well as economic value. It has also been reported that Mangifera indica is a fast-growing tree and stores a high amount of carbon in its tissues, hence its high priority rating (Miria and Khan 2013). Terminalia catappa was judged to be a good performer. Polyalthia longifolia was found to be unsuitable as a pollution sink because of its low anticipated performance.
In the present study, all four tree species had mean APTI values of more than 17 and are hence tolerant to vehicular pollution. The tolerance response to vehicular pollution in the study area was as follows: Terminalia catappa > Ficus platyphylla > Mangifera indica > Polyalthia longifolia. Polyalthia longifolia was classified as intermediately tolerant at Arterial road II and the control site using Padmavathi's approach; on re-categorisation, it was sensitive to vehicular air pollution, and hence its consideration for use in urban greening should be the least priority. However, the other tree species at the arterial road sites were moderately tolerant and/or tolerant based on the two approaches, and thus they are highly recommended for urban greening. The Anticipated Performance Index revealed that M. indica and Ficus platyphylla were very good performers, whilst Terminalia catappa was a good performer and as such could be planted along the roadsides for the mitigation of auto-exhaust pollution. Polyalthia longifolia was rated poorly and is unsuitable as a pollution sink. The APTI/API is important for better air quality management and for the selection of suitable tree species for roadsides. This could be a strategy for the reduction of air pollution in the Metropolis. It cannot be claimed that green belt plantation along the roads brings about the total removal of air pollutants; however, it might potentially remove toxic pollutants in substantial amounts.
AA: Ascorbic acid
ANOVA: Analysis of variance
API: Anticipated Performance Index
APTI: Air Pollution Tolerance Index
CO: Carbon monoxide
HC: Hydrocarbons
NOX: Nitrogen oxides
PAH: Polyaromatic hydrocarbons
RWC: Relative water content
SO2: Sulphur dioxide
VOC: Volatile organic compounds
Anin EK, Annan J, Alexander OF (2013) Assessing the causes of urban transportation challenges in the Kumasi Metropolis of Ghana. American Based Research Journal 2(6):1–12
Arnon DI (1949) Copper enzyme in isolated chloroplasts. Polyphenoloxidase in Beta vulgaris. Plant Physiol 24:1–15
Bhandarkar S (2013) Vehicular pollution, their effect on human health and mitigation measures. Vehicle Eng 1(2):33–40
Chandawat DK, Verma PU, Solanki HA (2011) Air pollution tolerance index (APTI) of tree species at cross road of Ahmadabad city. Life Sci Leaflets 20:935–943
Chattopadhyay S, Gupta S, Saha RN (2010) Spatial and temporal variation of urban air quality: a GIS approach. J Environ Prot 1:264–277
Chauhan A (2010) Photosynthetic pigment changes in some selected trees induced by automobile exhaust in Dehradun. J New York Sci 3(2):45–51
Ding QX, Lei HL (1987) Influence of soot dust on the pumpkin leaves external injury and internal structure. Environ Study Monit S2:39–41 (In Chinese)
Dwivedi AK, Tripathi BD (2007) Pollution tolerance and distribution pattern of plants in surrounding area of coal-fired industries. J Environ Biol 28:257–263
Esfahani A, Amini H, Samadi N, Kar S, Hoodaji M, Shirvani M, Porsakhi K (2013) Assesment of air pollution tolerance index of higher plants suitable for green belt development in east of Esfahan city, Iran. JOHP 3(2):87–94
Govindaraju M, Ganeshkumar RS, Muthukumaran VR, Visvanathan P (2012) Identification and evaluation of air-pollution-tolerant plants around lignite-based thermal power station for green belt development. Environ Sci Pollut Res 19:1210–1223
Gupta S, Nayek S, Bhattacharya P (2011) Effect of air-borne heavy metals on the biochemical signature of tree species in an industrial region, with an emphasis on anticipated performance index. Chem Ecol 27(4):381–392
Joshi PC, Swami A (2009) Air pollution induced changes in the photosynthetic pigments of selected plant species. J Environ Biol 30(2):295–298
Jyothi SJ, Jaya D (2010) Evaluation of air pollution tolerance index of selected plant species along roadsides in Thiruvananthapuram, Kerala. J Environ Biol 31:379–386
Keller J, Schwager H (1977) Air pollution and ascorbic acid. Eur J For Pathol 7:338–350
Liu YJ, Ding H (2008) Variation in air pollution tolerance index of plants near a steel factory: implication for landscape-plant species selection for industrial areas. WSEAS Trans Environ Dev 4:24–32
Mandal M, Mukherji S (2000) Changes in chlorophyll content, chlorophyllase activity, Hill reaction, photosynthetic CO2 uptake, sugar and starch contents in five dicotyledonous plants exposed to automobile exhaust pollution. J Environ Biol 21(1):37–41
Miria A, Khan AB (2013) Air pollution tolerance index and carbon storage of select urban trees—a comparative study. Int J Appl Res Stud 2:1–7
Mittler R (2002) Oxidative stress, antioxidants and stress tolerance. Trends Plant Sci 7(9):405–410
Nowak DJ, Crane DE (2000) The urban forest effects (UFORE) model: quantifying urban forest structure and functions. In: Hansen M, Burk T (eds) Integrated tools for natural resources inventories in the 21st Century. USDA Forest Service General Technical Report NC-212, St. Paul, pp 714–720
Nwadinigwe A (2014) Air pollution tolerance indices of some plants around Ama industrial complex in Enugu state, Nigeria. Afr J Biotechnol 13:1231–1236
Ogunkunle CO, Suleiman LB, Oyedeji S, Awotoye OO, Fatoba PO (2015) Assessing the air pollution tolerance index and anticipated performance index of some tree species for biomonitoring environmental health. Agrofor Syst 89(3):447–454
Padmavathi P, Cherukuri J, Reddy MA (2013) Impact of air pollution on crops in the vicinity of a power plant: a case study. Int J Eng Res Technol 12(2):3641–3651
Pant PP, Tripathi AK (2012) Effect of lead and cadmium on morphological parameters of Syzygium cumini Linn seedlings. Indian Journal of Science 1(1):29–31
Pathak V, Tripathi BD, Mishra VK (2011) Evaluation of anticipated performance index of some tree species for green belt development to mitigate traffic generated noise. Urban For Urban Green 10(1):61–66
Rai PK, Panda LLS (2014) Dust capturing potential and air pollution tolerance index (APTI) of some road side tree vegetation in Aizawl, Mizoram, India: an Indo-Burma hot spot region. Air Qual Atmos Health 7:93–101
Rai PK, Panda LLS, Chutia BM, Singh MM (2013) Comparative assessment of air pollution tolerance index (APTI) in the industrial (Rourkela) and non industrial area (Aizawl) of India: an eco-management approach. Glob J Environ Sci Technol 1(1):027–031
Rath S, Padhi SK, Kar MR, Ghosh PK (1994) Response of zinniato sulphur dioxide exposure. Ind J Ornamental Hort 2(1&2):42–45
Saxena P, Bhardwaj R, Ghosh C (2012) Status of air pollutants after implementation of CNG in Delhi. Curr World Environ 7(1):109–115
Shannigrahi AS, Fukushima T, Sharma RC (2004) Anticipated air pollution tolerance of some plant species considered for green belt development in and around an industrial/urban area in India. An overview. Int J Environ Stud 61:125–137
Sharma AP, Tripathi BD (2009) Biochemical response in tree foliage exposed to coal-fired power plant emission in seasonally dry tropical environment. Environ Monit Assess 158:197–212
Singh SK, Rao DN (1983) Evaluation of plants for their tolerance to air pollution. In: Proceedings of the Symposium on Indian Association for Air Pollution Control, New Delhi, pp 218–224
Singh SK, Rao DN, Agrawal M, Pande J, Narayan D (1991) Air pollution tolerance index of plants. J Env Manag 32:45–55
Swami A, Bhatt D, Joshi PC (2004) Effects of automobile pollution on sal (Shorea robusta) and rohini (Mallotus phillipinensis) at Asarori, Dehradun. Himalayan J Environ Zool 18(1):57–61
Thakar B, Mishra P (2010) Dust collection potential and air pollution tolerance index of tree vegetation around Vedanta aluminium limited. Jharsuguda The Bioscan 3:603–612
Thambavani SD, Prathipa V (2012) Biomonitoring of air pollution around urban and industrial sites. J Res Biol 1:007–014
Thawale PR, Satheesh BS, Wakode RR (2011) Biochemical changes in plant leaves as a biomarker of pollution due to anthropogenic activity. Environ Monit Assess 177(1-4):527–535
Tiwari S, Bansal S, Rai S (1993) Assessment of air pollution tolerance of two common tree species against sulphar dioxide. Biome 6(2):78–82
Tripathi AK, Gautam M (2007) Biochemical parameters of plants as indicators of air pollution. J Environ Biol 28(1):127–132
Varshney SRK, Varshney CK (1984) Effects of sulphur dioxide on ascorbic acid in crop plants. Environ Pollut 35:285–291
Verma A, Singh SN (2006) Biochemical and ultrastructural changes in plant foliage exposed to auto-pollution. Environ Monit Assess 120:585–602
Wagh ND, Shukla PV, Tambe SB, Ingle ST (2006) Biological monitoring of roadside plants exposed to vehicular pollution in Jalgaon city. J Environ Biol 27(2):419–421
Wei SZ, Yuan HJ, Zan YL (2014) Research on the SO2 resistance in three kinds of foliage plants. Chin Agric Sci Bull 30(4):152–156 (In Chinese with English abstract)
Wellburn AR (1994) The spectral determination of chlorophylls a and b, as well as total carotenoids, using various solvents with spectrophotometers of different resolution. J Plant Physiol 144(3):307–313
Winner WE, Mooney HA, Goldstein RA (1985) Sulphur dioxide and vegetation: physiology, ecology, and policy issues. Stanford University Press, Stanford, CA, p 593
Zhang P, Liu Y, Chen X, Yang Z, Zhu M, Li Y (2016) Pollution resistance assessment of existing landscape plants on Beijing streets based on air pollution tolerance index method. Ecotoxicol Environ Saf 132:212–223
Zhen SY (2000) The evolution of the effects of SO2 pollution on vegetation. Ecol Sci 19(1):59–64 (In Chinese with English abstract)
Zou XD (2007) Study on the air cleaning effects of urban greenspace system. Shanghai Jiao Tong Univ. (In Chinese with English abstract)
Ufere N. Uka is grateful to Ebonyi State University for granting him a Postgraduate Scholarship award sponsored by the Tertiary Education Trust Fund (TETFUND), Abuja, Nigeria.
The Tertiary Education Trust Fund (TETFUND), Abuja, Nigeria, was responsible for the payment of fees and upkeep.
All the data obtained during the study are presented in this manuscript. Additional information is available from the corresponding author upon request.
Department of Applied Biology, Faculty of Science, Ebonyi State University, P.M.B 048, Abakaliki, Ebonyi State, Nigeria
Ufere N. Uka
Department of Theoretical and Applied Biology, Kwame Nkrumah University of Science and Technology, Kumasi, Ghana
Ufere N. Uka, Ebenezer J. D. Belford & Jonathan N. Hogarh
Ebenezer J. D. Belford
Jonathan N. Hogarh
All authors share in every step of this work and all the authors read and approved the final manuscript.
Correspondence to Ufere N. Uka.
Uka, U.N., Belford, E.J.D. & Hogarh, J.N. Roadside air pollution in a tropical city: physiological and biochemical response from trees. Bull Natl Res Cent 43, 90 (2019). https://doi.org/10.1186/s42269-019-0117-7
RhoA/ROCK signaling mediates peroxynitrite-induced functional impairment of rat coronary vessels
Zhijun Sun1,
Xing Wu1,
Weiping Li1,
Hui Peng1,
Xuhua Shen1,
Lu Ma2,
Huirong Liu2 &
Hongwei Li1
BMC Cardiovascular Disorders volume 16, Article number: 193 (2016) Cite this article
Diabetes-induced vascular dysfunction may arise from reduced nitric oxide (NO) availability, following interaction with superoxide to form peroxynitrite. Peroxynitrite can induce formation of 3-nitrotyrosine-modified proteins. RhoA/ROCK signaling is also involved in diabetes-induced vascular dysfunction. The study aimed to investigate possible links between Rho/ROCK signaling, hyperglycemia, and peroxynitrite in small coronary arteries.
Rat small coronary arteries were exposed to normal (NG; 5.5 mM) or high (HG; 23 mM) D-glucose. Vascular ring constriction to 3 mM 4-aminopyridine and dilation to 1 μM forskolin were measured. Protein expression (immunohistochemistry and western blot), mRNA expression (real-time PCR), and protein activity (luminescence-based G-LISA and kinase activity spectroscopy assays) of RhoA, ROCK1, and ROCK2 were determined.
Vascular ring constriction and dilation were smaller in the HG group than in the NG group (P < 0.05); inhibition of RhoA or ROCK partially reversed the effects of HG. Peroxynitrite impaired vascular ring constriction/dilation; this was partially reversed by inhibition of RhoA or ROCK. Protein and mRNA expressions of RhoA, ROCK1, and ROCK2 were higher under HG than NG (P < 0.05). This HG-induced upregulation was attenuated by inhibition of RhoA or ROCK (P < 0.05). HG increased RhoA, ROCK1, and ROCK2 activity (P < 0.05). Peroxynitrite also enhanced RhoA, ROCK1, and ROCK2 activity; these actions were partially inhibited by 100 μM urate (peroxynitrite scavenger). Exogenous peroxynitrite had no effect on the expression of the voltage-dependent K+ channels 1.2 and 1.5.
Peroxynitrite-induced coronary vascular dysfunction may be mediated, at least in part, through increased expressions and activities of RhoA, ROCK1, and ROCK2.
Diabetes mellitus (DM) is associated with disturbances in coronary arterial function that contribute to the detrimental effects of DM on the heart. Indeed, DM has been reported to cause dysfunction of endothelial-dependent vasodilation of small coronary arteries [1–3], even during the early stages of the disease [4]. This impairment in the relaxation of coronary vessels may involve, at least in part, reduced availability of nitric oxide (NO) [5, 6]. In turn, this is thought to be due to a decrease in NO synthase (NOS) activity [7, 8], as well as to the interaction of NO with superoxide to generate peroxynitrite (ONOO−) [9]. Peroxynitrite is a powerful oxidizing agent that causes nitration of aromatic amino acid residues, forming 3-nitrotyrosine (3-NT)-modified proteins. Enhanced 3-NT formation has been reported in an animal model of DM [10]. Peroxynitrite is known to participate in DM-induced endothelial dysfunction [11, 12]. Indeed, studies in DM models showed that increases in peroxynitrite levels were associated with vascular permeability and impaired vasorelaxation [13]. It has also been shown that peroxynitrite suppresses eNOS expression through RhoA activation, leading to endothelial dysfunction [11]. Peroxynitrite leads to the formation of nitrotyrosine in the artery walls, which is directly toxic to endothelial cells and leads to endothelial dysfunction [12]. The presence of nitrotyrosine is associated with microvascular anomalies in DM, and correlates with blood glucose [14].
RhoA is a small guanosine-5'-triphosphate-binding protein bound to the plasma membrane. Upon stimulation, RhoA activates RhoA-associated protein kinase (ROCK), of which there are two isoforms (ROCK1 and ROCK2) with an overall homology of 65 % [15]. RhoA/ROCK signaling regulates numerous cellular processes, including gene transcription, organization of the actin cytoskeleton, and cell contraction, adhesion, motility, proliferation, and differentiation [16]. The RhoA/ROCK signaling pathway has been implicated in several pathological conditions [16, 17] including hypertension [18, 19], atherosclerosis [20], stroke [21], coronary vasospasm [22], angina [23], ischemia-reperfusion injury [24], and heart failure [25, 26]. There is strong evidence that DM increases the expressions and activities of RhoA and ROCK in various tissues, and that the resulting phosphorylation of downstream targets enhances the contraction of vascular smooth muscle cells [27–33].
Potassium channels regulate K+ efflux and are major regulators of membrane potential of vascular smooth muscle cells. Therefore, K+ channel activity is an important factor involved in the regulation of vasoconstriction and blood vessel diameter [34]. Voltage-dependent K+ channels (Kv) limit membrane depolarization to maintain the vascular tone [35]. Kv1.2 and Kv1.5 are subunits that are expressed in vascular smooth muscle cells [35]. Many vasoconstrictors act through the inhibition of K+ channel activity [35].
However, the precise mechanisms by which peroxynitrite impairs the function of small coronary arteries remain largely unknown. The aim of the present study was to explore whether the RhoA/ROCK signaling pathway and KV are involved in mediating peroxynitrite-induced impairment of rat small coronary arteries.
Male Sprague–Dawley rats (age, 7–8 weeks; weight, 180–220 g) were provided by Vital River Laboratory Animal Technology Co., Ltd. (Beijing, China). Animals were housed under specific pathogen-free conditions and given free access to food and water. All animal experiments were performed in accordance with the Guidelines for the Care and Use of Laboratory Animals, Ministry of Science and Technology (Beijing, China). The study was approved by the Animal Care and Use Committee of Beijing Friendship Hospital, Capital Medical University (Beijing, China).
Preparation of isolated rat small coronary arteries
Rats were anesthetized with chloral hydrate (0.5 mL/100 g i.p.) and received heparin (2500 U/kg) to inhibit blood clot formation. The heart was removed and placed in HEPES-buffered (Hanks) solution at 4 °C. The cardiac apex and ascending aorta were fixed using pins, and the left auricle was identified under a 2 × 10 stereoscopic microscope (with a halogen lamp as a cold light source; Nikon, Tokyo, Japan). The left anterior descending branch of the coronary artery was identified under the left auricle, and small coronary arteries with diameters ≤200 μm were rapidly isolated as vascular rings of about 2 mm in length. The endothelium was denuded with air, and denudation was verified by failure to dilate in response to 1 μM acetylcholine.
For the experiments, the isolated vascular rings were incubated in 6-well dishes at 37 °C for 24 h with Dulbecco's modified Eagle's medium (DMEM; Gibco, USA) supplemented with 20 % fetal calf serum (Gibco, USA), 100 U/mL penicillin G, and 100 mg/mL streptomycin. The vascular rings were divided into the following groups: normal glucose (NG group, 5.5 mM D-glucose), L-glucose (LG group, 5.5 mM D-glucose plus 17.5 mM L-glucose), or high glucose (HG group, 23 mM D-glucose). In experiments of pharmacologic disruption of the RhoA/ROCK pathway, vascular rings of the HG group were pre-treated for 16 h with either the RhoA inhibitor, C3 transferase (1 μg/ml, HG + C3 group; Cytoskeleton, USA), or the ROCK inhibitor, Y-27632 (10 μM, HG + Y-27632 group; Sigma-Aldrich, St. Louis, MO, USA), followed by culture in high glucose medium. In experiments investigating the effects of exogenous peroxynitrite, vascular rings of the NG group were incubated (37 °C, 24 h) in the presence of additional agents: 5 μM peroxynitrite (ONOO− group), 5 μM decomposed peroxynitrite (DC-ONOO− group), or 5 μM peroxynitrite plus 100 μM urate (ONOO− + urate group). Peroxynitrite was synthesized according to published methods, and determined spectrophotometrically using the reported extinction coefficient for peroxynitrite (1670/M/cm) [36]. Before each application, the stock solution was diluted in 1 mM NaOH and rapidly added to the chamber to achieve a final concentration of 5 μM. Decomposed peroxynitrite was made by leaving peroxynitrite at room temperature for at least 2 h. Urate (Sigma-Aldrich) was used as a scavenger of peroxynitrite.
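As a minimal sketch of the spectrophotometric determination and chamber dilution described above (the absorbance, path length, and chamber volume below are illustrative assumptions, not values from the study):

```python
EPSILON_ONOO_M_CM = 1670.0  # extinction coefficient for peroxynitrite, per M per cm (from the text)

def peroxynitrite_molarity(absorbance: float, path_length_cm: float = 1.0) -> float:
    """Stock concentration (M) from the Beer-Lambert law: c = A / (epsilon * l)."""
    return absorbance / (EPSILON_ONOO_M_CM * path_length_cm)

def stock_volume_ul(stock_molar: float, final_molar: float, chamber_volume_ml: float) -> float:
    """Approximate volume of stock (uL) to add so the chamber reaches the target concentration
    (neglects the small volume added, since the stock is far more concentrated)."""
    return final_molar * chamber_volume_ml / stock_molar * 1000.0

# Illustrative numbers (assumptions): A = 0.5 in a 1 cm cuvette; target 5 uM in a 5 mL chamber.
stock = peroxynitrite_molarity(0.5)
print(f"Stock ~ {stock * 1e3:.2f} mM; add ~ {stock_volume_ul(stock, 5e-6, 5.0):.0f} uL to the chamber")
```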
Contraction and relaxation of vascular rings in response to 4-aminopyridine and forskolin
After incubation with the appropriate experimental solution, each coronary artery ring was threaded onto two tungsten filaments (each with a diameter of 40 μm) and fixed to the bath transducers of a Multi Wire Myograph System-610 M (DMT, Aarhus, Denmark). The vascular ring was bathed in HEPES-buffered solution (8.415 g/l NaCl, 0.432 g/l KCl, 0.244 g/l MgCl2 · 6H2O, 0.277 g/l CaCl2, 2 g/l glucose, 1.1915 g/l HEPES) gassed with 100 % O2 and maintained at 37 °C. As a standardization procedure, the transmural pressure of the vascular ring was set to a baseline value of 13.33 kPa (100 mmHg); this was achieved by adjusting the tension of the vascular ring to the desired value, with the aid of the following equations:
$$ P_i = 2\pi T_i / IC_i $$
$$ T_i = F_i / 2L $$
$$ IC_i = 205.6 + 2X_i $$
Pi (kPa) = effective transmural pressure
Ti (mN/mm) = vascular ring tension per unit length
ICi (μm) = vascular ring inner perimeter
Fi (mN) = total tension of the vascular ring, determined by the myography system
Xi (μm) = distance between the two tungsten wires, determined by the myography system
L (mm) = vascular ring length, determined with a dissecting microscope and micrometer
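A minimal numerical sketch of this normalization step, assuming illustrative values for the wire separation X_i and the ring length L (kPa is treated as mN/mm², so the inner circumference is converted from μm to mm):

```python
import math

def inner_circumference_mm(x_i_um: float) -> float:
    """IC_i = 205.6 + 2*X_i, given in micrometres by the myograph; converted here to mm."""
    return (205.6 + 2.0 * x_i_um) / 1000.0

def tension_for_pressure(p_kpa: float, x_i_um: float) -> float:
    """Wall tension per unit length T_i (mN/mm) giving transmural pressure P_i (kPa),
    from P_i = 2*pi*T_i / IC_i (1 kPa = 1 mN/mm^2)."""
    return p_kpa * inner_circumference_mm(x_i_um) / (2.0 * math.pi)

def total_force_mn(t_i: float, ring_length_mm: float) -> float:
    """Total force F_i (mN) read by the myograph, from T_i = F_i / (2*L)."""
    return 2.0 * ring_length_mm * t_i

# Illustrative values (assumptions): 150 um wire separation, 2 mm ring, target 13.33 kPa (100 mmHg).
x_i, ring_length, target_pressure = 150.0, 2.0, 13.33
t_i = tension_for_pressure(target_pressure, x_i)
print(f"Set tension to {t_i:.2f} mN/mm (total force {total_force_mn(t_i, ring_length):.2f} mN)")
```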
Experiments were performed after an equilibration period of 60–90 min, during which the bathing medium was replaced every 15–20 min with pre-heated (37 °C) HEPES-buffered solution. The tension changes of each vascular ring were determined automatically by the myography system and recorded by a computer running the Chart 5.5 software (ADInstruments, Dunedin, New Zealand). Before examining the responses of the vascular rings to vasoactive substances, the rings were exposed repeatedly to 120 mM KCl until the contraction amplitude differences for three successive applications of KCl were less than 10 %; tension was then allowed to stabilize for 30 min before vasoactive substances were administered. Contraction was measured in response to a range of concentrations (0.1–3 mM) of 4-aminopyridine (4-AP, Sigma-Aldrich), a blocker of the Kv1 K+ channels. Relaxation in response to 1 μM forskolin (an adenylyl cyclase activator; Sigma-Aldrich) was also assessed.
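As a small illustration of the stability criterion described above (a sketch only; the exact way the <10 % difference was computed is not specified, so the comparison to the mean below is an assumption):

```python
def kcl_response_stable(amplitudes_mn, tolerance=0.10):
    """Return True once the last three KCl contraction amplitudes (mN) differ from one
    another by less than 10 % of their mean (assumed reading of the stability criterion)."""
    if len(amplitudes_mn) < 3:
        return False
    last_three = amplitudes_mn[-3:]
    mean_amp = sum(last_three) / 3.0
    return (max(last_three) - min(last_three)) / mean_amp < tolerance

# Hypothetical successive 120 mM KCl responses for one ring (mN).
responses = [2.4, 2.9, 3.1, 3.15, 3.12]
print(kcl_response_stable(responses))  # True: the last three responses agree within 10 %
```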
Isolation of vascular smooth muscle cells from rat small coronary arteries
Small coronary arteries with diameters ≤200 μm were obtained as described above. Coronary vascular smooth muscle cells (VSMCs) were isolated enzymatically. The adventitia was dissected from the small coronary arteries, and the vessel washed three times with sterile Hanks' buffered salt solution. The vessel was incubated for 10 min at room temperature with 1 mL of phosphate-buffered saline (PBS) containing 0.1 % bovine serum albumin (BSA), and subsequently digested for 10 min at 37 °C in 1 mL of PBS containing 0.15 % papain, 0.1 % dithioerythritol (DTE), and 0.1 % BSA. The vessel was then incubated for 10 min at 37 °C with 1 mL of PBS containing 0.2 % collagenase, 0.05 % elastase, and 0.1 % soybean trypsin inhibitor. Subsequently, 5 mL of DMEM containing 20 % fetal bovine serum (FBS) were added to stop digestion. The mixture was centrifuged at 1000 rpm for 7 min, and the supernatant was discarded. The pellet was washed with sterile PBS and resuspended in 4–5 mL of culture medium. The resuspended cells were seeded into culture flasks and cultured for 24 h (DMEM containing 20 % FBS). When the cells were fully adherent, as determined under an inverted microscope, the culture medium was changed to DMEM containing 10 % FBS, 100 U/mL of penicillin G, and 100 μg/mL of streptomycin. Primary VSMCs were cultured for 7 days to reach a logarithmic phase. The cell passages were performed when cell coverage reached 80 %. Culture medium was discarded, and the cells were washed with sterile PBS and digested with 1 mL of 0.25 % trypsin (2–3 min, 37 °C). Digestion was terminated with culture medium containing FBS when the morphology of 80–90 % of the cells changed from spindle-shaped to round. The digested cells were seeded into flasks for culture, and passages could be performed after 3–4 days. Cells were then used for glucose exposure experiments.
Smooth muscle cells were identified by immunofluorescence. Single-cell suspensions were seeded into dishes containing coverslips, and a coverslip covered with a monolayer of cells was fixed with 95 % pre-cooled ethanol for 30 min. After three washes with PBS, the cells were incubated with the primary antibody (mouse anti-rat alpha-smooth muscle actin polyclonal antibody; Santa Cruz Biotechnology, Santa Cruz, CA, USA) for 30 min at 37 °C. After three washes with PBS, the cells were incubated with the secondary antibody (IgG-FITC-labeled chicken anti-mouse secondary antibody; Santa Cruz Biotechnology) for 30 min at 37 °C. After washing with PBS (three times), 4′,6-diamidino-2-phenylindole (DAPI) was added for 2 min (to stain the nuclei). After three washes with PBS, the coverslip was mounted onto a slide with mounting medium, and the cells were observed using a fluorescence microscope (Nikon, Tokyo, Japan).
Real-time PCR for mRNA expression
Total RNA of cultured coronary vascular smooth muscle cells was extracted with Trizol (Invitrogen, Carlsbad, CA, USA), and RNA concentration and purity were determined using an ultraviolet spectrophotometer. Total RNA (1 μg) was reverse-transcribed into cDNA with reverse transcriptase (Shanghai Jierdun Biotech Co. Ltd., Shanghai, China), according to the manufacturer's protocol. The expressions of RhoA, ROCK1, ROCK2, Kv1.2, and Kv1.5 mRNA were assessed by real-time polymerase chain reaction (PCR) of the cDNA (2 μg), using an ABI Step One Plus Real-Time-PCR System (Applied Biosystems, Foster City, CA, USA), SYBR Green Master Mix (Applied Biosystems), and the following primers (Invitrogen):
RhoA (103 bp):
F: 5′-CATCCCAGAAAAGTGGACTCCA-3′
R: 5′-CCTTGTGTGCTCATCATTCCG-3′
ROCK1 (113 bp):
F: 5′-GAATGACATGCAAGCGCAAT-3′
R: 5′-GTCCAAAAGTTTTGCACGCA-3′
ROCK2:
F: 5′-GAAACAACTGGATGAAGCTAATGC-3′
R: 5′-GTTTCAAGCAGGCAGTTTTTATCTT-3′
Kv1.2:
F: 5′-CGTCAGCTTCTGTCTGGAAACC-3′
R: 5′-TGCATGTCCTCGTTCTCATCC-3′
Kv1.5:
F: 5′-CCTGTCCCCGTCATCGTCTC-3′
R: 5′-ACCTTCCGTTGACCCCCTGT-3′
GAPDH (75 bp):
F: 5′-CCTGCCAAGTATGATGACA-3′
R: 5′-GTAGCCCAGGATGCCC-3′
GAPDH was used as a reference to obtain the relative fold changes for the target genes, using the comparative Ct method. Relative mRNA expression was estimated using the 2^(−ΔΔCt) method.
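A minimal sketch of the comparative Ct calculation named above (the Ct values are hypothetical; GAPDH is the reference gene, as in the text):

```python
def relative_expression(ct_target_sample: float, ct_ref_sample: float,
                        ct_target_control: float, ct_ref_control: float) -> float:
    """Fold change by the 2^(-ddCt) method: normalize each sample to the reference gene,
    then express the treated sample relative to the control."""
    d_ct_sample = ct_target_sample - ct_ref_sample
    d_ct_control = ct_target_control - ct_ref_control
    return 2.0 ** (-(d_ct_sample - d_ct_control))

# Hypothetical Ct values for RhoA (target) and GAPDH (reference) in an HG sample vs an NG control.
fold_change = relative_expression(24.0, 18.0, 25.5, 18.2)
print(f"RhoA fold change vs control: {fold_change:.2f}")
```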
Standard immunohistochemistry protocols were applied to small coronary arteries treated for 24 h with the various experimental solutions, in order to determine the protein expression levels of RhoA, ROCK1, and ROCK2. The small coronary arteries were fixed with 4 % paraformaldehyde and sectioned (4 μm). The following primary antibodies (100 μL) were used: anti-RhoA (1:200 dilution; BS1782; Bioworld Technology Inc., St. Louis Park, MN, USA), anti-ROCK1 (1:200 dilution; sc-6055; Santa Cruz Biotechnology), and anti-ROCK2 (1:250 dilution; sc-1851; Santa Cruz Biotechnology). The secondary antibody was biotinylated goat anti-rabbit IgG (Beyotime Institute of Biotechnology, Shanghai, China). Sections were visualized with diaminobenzidine (DAB; Shanghai Jierdun Biotech) and counterstained with hematoxylin and eosin (H&E). Images were captured with a digital camera (Nikon) and analyzed using the IMS imaging processing system (Shanghai Jierdun Biotech). Positively stained regions were counted and analyzed. Cardiomyocytes were excluded.
Adherent cells were cultured in 10-cm dishes for protein isolation. The cell culture medium was discarded. The cells were washed with PBS, and 1 mL of lysis buffer was added for cell lysis. A 200–1000 μL sample containing about 200–1000 μg of protein was mixed with 1 μg of IgG (that shared the same host species as the IgG used in the immunoprecipitation) and 20 μL of protein A/G agarose. The mixture was incubated at 4 °C for 30–120 min, with gentle agitation, and then centrifuged at 1000 g for 5 min. The supernatant was isolated for subsequent protein immunoprecipitation. Primary antibodies (0.2–2 μg) against RhoA (1:400), ROCK1 (1:400), ROCK2 (1:400), Kv1.2 (1:500), or Kv1.5 (1:500) were added to the supernatant, and the mixture was incubated overnight at 4 °C with gentle agitation. Then, 20 μL of thoroughly resuspended protein A/G agarose was added, and this was incubated for a further 1–3 h at 4 °C, with gentle agitation. The mixture was centrifuged at 1000 g for 5 min or transiently at high speed, and the supernatant was discarded. The pellet was washed five times with 0.5–1 mL of lysis buffer (used for protein isolation) or PBS, and resuspended in 20–40 μL of 1× SDS-PAGE loading buffer. After transient centrifugation at high speed, the sample was boiled at 100 °C for 3–5 min. Partial or total samples were used for protein quantification or SDS-PAGE electrophoresis, or stored at −20 °C for later use.
Western blotting for protein expression
Total protein was extracted from artery rings, and the protein concentration determined using a bicinchoninic acid (BCA) protein assay kit (Thermo Fisher Scientific, Waltham, MA, USA). Immunoblotting was performed with the following primary antibodies: anti-RhoA (Bioworld), anti-ROCK1 (Santa Cruz Biotechnology), anti-ROCK2 (Santa Cruz Biotechnology), anti-Kv1.2, or anti-Kv1.5. HRP-conjugated anti-rabbit secondary antibody was used at a dilution of 1:2000. Detection of GAPDH (diluted 1:1500; #5471; Cell Signaling Technology, Danvers, MA, USA) served as an internal loading control. All blots were scanned with the LabWorks image processing system (UVP Inc., Upland, CA, USA). Protein band pixel values were calculated using Gel-pro Analyzer 4.0 (Media Cybernetics Inc., Rockville, MD, USA).
Measurement of RhoA and ROCK activities
RhoA activation was measured using the luminescence-based G-LISA Activation Assay kit (Cytoskeleton Inc., Denver, CO, USA), according to the manufacturer's instructions. Values were normalized to the protein content using a colorimetric assay (Bio-Rad Laboratories, Hercules, CA, USA), according to the manufacturer's recommendations. ROCK1 and ROCK2 activities were detected using the Kinase Activity Spectroscopy Kit (GMS50184.3 for ROCK1 and GMS50184.1 for ROCK2; Genmed Scientific Inc., Arlington, MA, USA), according to the manufacturer's instructions.
Continuous data were expressed as means ± standard deviation (SD). Statistical analyses were performed using SPSS 18.0 (IBM, Armonk, NY, USA). Comparisons between groups were performed using one-way analysis of variance (ANOVA) or the Student's t-test, as appropriate. The Bonferroni correction was applied when comparing three or more groups. P values <0.05 were considered statistically significant.
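A minimal sketch of the statistical workflow described above, using SciPy in place of SPSS (the three arrays are illustrative placeholders, not study data):

```python
from itertools import combinations

from scipy import stats

# Illustrative placeholder measurements for three groups (e.g. NG, LG, HG); not study data.
groups = {
    "NG": [3.1, 3.3, 2.9, 3.2, 3.0, 3.4],
    "LG": [2.1, 2.0, 2.2, 1.9, 2.1, 2.2],
    "HG": [1.4, 1.3, 1.5, 1.2, 1.4, 1.4],
}

# One-way ANOVA across all groups.
f_stat, p_anova = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.3g}")

# Pairwise t-tests with a Bonferroni correction (raw p multiplied by the number of comparisons).
pairs = list(combinations(groups, 2))
for name_a, name_b in pairs:
    t_stat, p_raw = stats.ttest_ind(groups[name_a], groups[name_b])
    p_bonferroni = min(p_raw * len(pairs), 1.0)
    print(f"{name_a} vs {name_b}: corrected p = {p_bonferroni:.3g}")
```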
Impairment of vascular ring contraction and dilation by high glucose or peroxynitrite may involve RhoA/ROCK signaling
Figure 1a presents concentration-response curves showing the contractile responses of vascular rings in the various experimental groups treated with 4-AP. Maximal contraction in response to 3 mM 4-AP was significantly smaller in the LG (2.08 ± 0.17 mN) and HG (1.37 ± 0.22 mN) groups than in the NG group (3.15 ± 0.31 mN) (P < 0.05; n = 6). Interestingly, inhibition of RhoA or ROCK partially reversed this effect of high glucose: the maximal contractile response of rings treated with 4-AP was significantly larger in the HG + C3 (2.13 ± 0.09 mN) and HG + Y-27632 (2.02 ± 0.16 mN) groups than in the HG group (P < 0.05; n = 6). Similar observations were made in experiments using forskolin (Fig. 1b): dilation in response to 1 μM forskolin was significantly reduced in the LG (39.47 ± 1.32 %) and HG (35.20 ± 1.98 %) groups compared with the NG group (48.97 ± 1.77 %), and was higher in both the HG + C3 (39.80 ± 1.59 %) and HG + Y-27632 (39.68 ± 1.57 %) groups than in the HG group (P < 0.05; n = 6).
Impairment of vascular ring contraction and dilation by high glucose involves RhoA/ROCK signaling. a Concentration-response curves showing the contractile responses of rat small coronary artery vascular rings treated with 4-AP (a voltage-gated K+ channel blocker; 0.1–3 mM) under various experimental conditions. Treatment with high glucose (23 mM D-glucose) was associated with attenuation of contraction of rings treated with 4-AP (compared with 5.5 mM D-glucose) that was partially reversed by C3 transferase (a RhoA inhibitor) and Y-27632 (a ROCK inhibitor). b Rat small coronary artery vascular rings treated with high glucose (23 mM D-glucose) showed an impairment of dilation (compared with 5.5 mM D-glucose) in response to 1 μM forskolin (adenylyl cyclase activator); this impairment was partially reversed by C3 transferase and Y-27632. Data are shown as mean ± SD (n = 6). *P < 0.05 vs. NG. #P < 0.05 vs. HG. c Concentration-response curves showing the contractile responses of rat small coronary artery vascular rings to 4-AP (a voltage-gated K+ channel blocker; 0.1–3 mM) under various experimental conditions. Treatment with peroxynitrite (5 μM) was associated with attenuation of contraction of rings treated with 4-AP (compared with 5.5 mM D-glucose and 5 μM decomposed peroxynitrite) that was partially reversed by urate, C3 transferase, and Y-27632. d Rat small coronary artery rings treated with ONOO− (5 μM peroxynitrite) showed an impairment of dilation (compared with 5.5 mM D-glucose and 5 μM decomposed peroxynitrite) in response to 1 μM forskolin (adenylyl cyclase activator); this impairment was partially reversed by urate, C3 transferase, and Y-27632. Data are shown as mean ± SD (n = 6). *P < 0.05 vs. ONOO−
Additional experiments were carried out to determine whether exogenous peroxynitrite (5 μM) could mimic these effects of high glucose. The contractile response of rings treated with 3 mM 4-AP was significantly reduced in the ONOO− group (1.54 ± 0.21 mN) compared with the NG group (3.15 ± 0.31 mN) (P < 0.05; n = 6), but DC-ONOO− was without effect (2.95 ± 0.26 mN). Furthermore, the attenuation of contraction by ONOO- was inhibited (albeit not completely) by urate (2.39 ± 0.18 mN), C3 transferase (2.17 ± 0.15 mN), and Y-27632 (2.27 ± 0.10 mN) (P < 0.05 compared with the ONOO− group; n = 6) (Fig. 1c). Similarly, relaxation to forskolin was significantly smaller in the ONOO− group (36.37 ± 1.80 %) than in the NG (48.97 ± 1.77 %), DC-ONOO− (48.55 ± 1.64 %), ONOO− + urate (44.17 ± 1.14 %), ONOO− + C3 (42.02 ± 1.73 %), and ONOO− + Y-27632 (41.22 ± 1.53 %) groups (P < 0.05; n = 6) (Fig. 1d).
Treatment with high glucose increases protein expressions of RhoA, ROCK1, and ROCK2 detected by immunohistochemistry in denuded vessels
Figure 2 shows representative images obtained from immunohistochemistry experiments carried out to detect the protein expressions of RhoA, ROCK1, and ROCK2 in the various groups. Quantitative analyses of the immunohistochemistry data are shown in Fig. 3. Protein expressions of RhoA, ROCK1, and ROCK2 were significantly higher in the HG group than in the NG group (P < 0.05; n = 3). Importantly, inhibition of RhoA (C3 transferase) or ROCK (Y-27632) significantly attenuated the upregulation of ROCK1 and ROCK2 induced by high glucose, while Y-27632 inhibited the upregulation of RhoA (P < 0.05; n = 3). This suggests that an increase in RhoA/ROCK signaling induced by high glucose can in turn feed back to upregulate the protein expressions of RhoA, ROCK1, and ROCK2.
Immunohistochemistry showing protein expressions of RhoA, ROCK1, and ROCK2 in rat small coronary artery rings. Sections were stained with rabbit primary antibodies against RhoA, ROCK1, or ROCK2, followed by goat anti-rabbit biotinylated secondary antibody, and then visualized with diaminobenzidine. Sections were counterstained with hematoxylin and eosin. Brown staining in the image is indicative of expression of the protein of interest. Treatment with high glucose (23 mM D-glucose) was associated with enhanced expressions of RhoA, ROCK1, and ROCK2 compared with normal glucose (5.5 mM). These effects of high glucose were partially reversed by C3 transferase (a RhoA inhibitor) and Y-27632 (a ROCK inhibitor). NG: 5.5 mM glucose; HG: 23 mM D-glucose; HG + C3: 23 mM D-glucose and 1 μg/ml C3 transferase; HG + Y27632: 23 mM D-glucose and 10 μM Y-27632. Magnification: ×200
Mean protein expressions of RhoA, ROCK1, and ROCK2 in rat small coronary artery vascular rings determined from immunohistochemistry experiments. a Protein expression of RhoA. b Protein expression of ROCK1. c Protein expression of ROCK2. NG: 5.5 mM glucose; LG: 5.5 mM D-glucose plus 17.5 mM L-glucose; HG: 23 mM D-glucose; HG + C3: 23 mM D-glucose and 1 μg/ml C3 transferase; HG + Y27632: 23 mM D-glucose and 10 μM Y-27632. Data are shown as mean ± SD (n = 6 rats, 5–6 vessels from each). * P < 0.05 compared with the NG group; # P < 0.05 compared with the LG group; ▲ P < 0.05 compared with the HG group. Cardiomyocytes were excluded
Treatment with high glucose increases mRNA and protein expressions of RhoA, ROCK1, and ROCK2 detected by real-time PCR and Western blotting
The mRNA expression of RhoA, ROCK1, and ROCK2, detected using real-time PCR, was significantly enhanced by high glucose (P < 0.05; n = 5; Fig. 4a–c). No significant effects of exogenous peroxynitrite were observed (Fig. 4d–f), although a trend toward a small increase in expression could not be excluded. Consistent with the immunohistochemistry data, high glucose significantly enhanced the protein expressions of RhoA, ROCK1, and ROCK2 measured using western blotting (P < 0.05; n = 5; Fig. 5).
mRNA expressions of RhoA, ROCK1, and ROCK2 in rat coronary vascular smooth muscle cells. mRNA expression was determined using real-time PCR. Treatment with high glucose (23 mM D-glucose) resulted in an increase in the mRNA expression of RhoA (a), ROCK1 (b), and ROCK2 (c). However, exogenous peroxynitrite was without significant effect on the mRNA expression of RhoA (d), ROCK1 (e), and ROCK2 (f). NG: 5.5 mM glucose; LG: 5.5 mM D-glucose plus 17.5 mM L-glucose; HG: 23 mM D-glucose; ONOO: 5 μM peroxynitrite; DC-ONOO: 5 μM decomposed peroxynitrite; ONOO + Urate: 5 μM peroxynitrite and 100 μM urate (to scavenge peroxynitrite). Data are shown as mean ± SD (n = 5). * P < 0.05 compared with the NG group
Protein expressions of RhoA, ROCK1 and ROCK2 in rat coronary vascular smooth muscle cells. Protein expression was determined using the Western blot technique. a Representative blots for RhoA, ROCK1, and ROCK2; GAPDH expression was used as an internal reference. b Mean data for protein expression levels. Treatment with high glucose was associated with enhanced expression of RhoA, ROCK1, and ROCK2 compared with controls. NG: 5.5 mM glucose; LG: 5.5 mM D-glucose plus 17.5 mM L-glucose; HG: 23 mM D-glucose. Data are shown as mean ± SD (n = 5). * P < 0.05 compared with the NG group
Treatment with high glucose or exogenous peroxynitrite increases the activity of RhoA, ROCK1, and ROCK2
Further studies were undertaken to determine whether the high glucose-induced upregulation of RhoA, ROCK1, and ROCK2 expression translated into enhanced activities. As shown in Fig. 6a–c, treatment with high glucose was associated with significant increases in RhoA, ROCK1, and ROCK2 activity (P < 0.05; n = 5). Interestingly, 5 μM peroxynitrite (but not decomposed peroxynitrite) mimicked the effects of high glucose, and these actions of exogenous peroxynitrite were partially inhibited by urate (Fig. 6d–f).
RhoA, ROCK1, and ROCK2 activities in rat coronary vascular smooth muscle cells. RhoA activation was measured using the luminescence-based G-LISA Activation Assay kit; ROCK1 and ROCK2 activities were detected using the Genmed Kinase Activity Spectroscopy Kit. Treatment with high glucose (23 mM D-glucose) resulted in an increase in the activities of RhoA (a), ROCK1 (b), and ROCK2 (c). Similarly, treatment with 5 μM peroxynitrite was associated with increases in RhoA (a), ROCK1 (b), and ROCK2 (c) activities; these effects could be partially inhibited by 100 μM urate (a scavenger of peroxynitrite). NG: 5.5 mM glucose; LG: 5.5 mM D-glucose plus 17.5 mM L-glucose; HG: 23 mM D-glucose; ONOO−: 5 μM peroxynitrite; DC-ONOO: 5 μM decomposed peroxynitrite; ONOO + Urate: 5 μM peroxynitrite and 100 μM urate (to scavenge peroxynitrite). Data are shown as mean ± SD (n = 5). * P < 0.05 compared with the NG group; & P < 0.05 compared with the ONOO− group
mRNA and protein expressions of Kv1.2 and Kv1.5 are not affected by exogenous peroxynitrite
To investigate whether the functional impairment of vascular ring contraction/relaxation by high glucose might involve peroxynitrite-mediated changes in the expressions of Kv, the mRNA and protein expression of Kv1.2 and Kv1.5 were determined. As shown in Fig. 7, exogenous peroxynitrite (5 μM) had no significant effects on the mRNA and protein expression of Kv1.2 and Kv1.5 (n = 5).
mRNA and protein expression of Kv1.2 and Kv1.5 in coronary vascular smooth muscle cells. a Kv1.2 mRNA expression (real-time PCR) was not affected by 5 μM peroxynitrite. b Kv1.5 mRNA expression (real-time PCR) was not influenced by 5 μM peroxynitrite. c Representative blots for Kv1.2 and Kv1.5. GAPDH expression was used as an internal reference. d Mean data for protein expression levels. Peroxynitrite (5 μM) had no effect on the protein expression of Kv1.2 and Kv1.5. NG: 5.5 mM glucose; ONOO−: 5 μM peroxynitrite; DC-ONOO: 5 μM decomposed peroxynitrite; C3 or ONOO− + C3: 5 μM peroxynitrite and C3 transferase (to inhibit RhoA); Y-27632 or ONOO− + Y-27632: 5 μM peroxynitrite and Y-27632 (to inhibit ROCK). Data are shown as mean ± SD (n = 5)
Exogenous peroxynitrite induces 3-NT-modification of proteins
Exogenous peroxynitrite (5 μM) caused significant increases in the levels of 3-NT-modified RhoA, ROCK1, ROCK2, and Kv1.2 (P < 0.05; n = 5; Fig. 8). However, no effect on Kv1.5 was observed (Fig. 8).
Nitrotyrosine-modification of proteins by peroxynitrite. 3-NT-modified proteins were detected by immunoprecipitation. a Representative blots showing detection of 3-NT-modified RhoA, ROCK1, and ROCK2. b Representative blots showing detection of 3-NT-modified Kv1.2 and Kv1.5. c Quantification of 3-NT-modified RhoA, ROCK1, and ROCK2. 5 μM peroxynitrite (but not decomposed peroxynitrite) induced an increase in the levels of 3-NT-modified RhoA, ROCK1, and ROCK2; this was reversed by 100 μM urate (a scavenger of peroxynitrite). d Quantification of 3-NT-modified Kv1.2 and Kv1.5. 5 μM peroxynitrite (but not decomposed peroxynitrite) induced an increase in the levels of 3-NT-modified Kv1.2; this was reversed by 100 μM urate. In contrast, peroxynitrite had no effect on levels of 3-NT-modified Kv1.5. NG: 5.5 mM glucose; ONOO−: 5 μM peroxynitrite; DC-ONOO: 5 μM decomposed peroxynitrite; ONOO + Urate: 5 μM peroxynitrite and 100 μM urate (to scavenge peroxynitrite); C3 or ONOO− + C3: 5 μM peroxynitrite and C3 transferase (to inhibit RhoA); Y-27632 or ONOO− + Y-27632: 5 μM peroxynitrite and Y-27632 (to inhibit ROCK). Data are shown as mean ± SD (n = 5). * P < 0.05 compared with NG; & P < 0.05 compared with ONOO−
The aim of the present study was to explore whether the RhoA/ROCK signaling pathway is involved in mediating peroxynitrite-induced impairment of rat small coronary arteries. The main findings were that short-term exposure to high glucose or peroxynitrite caused functional impairment of rat small coronary artery rings that was partially reversed by RhoA or ROCK inhibition, together with enhanced RhoA, ROCK1, and ROCK2 activity. Therefore, increased RhoA, ROCK1, and ROCK2 expression and/or activities may contribute to peroxynitrite-induced coronary vascular dysfunction. Although the underlying mechanisms remain to be established, the effects of peroxynitrite may involve 3-NT modification of RhoA, ROCK1, ROCK2, and/or other proteins, which in turn enhances RhoA/ROCK signaling. The increase in constrictor tone in response to 4-AP was smaller in HG- and peroxynitrite-treated vessels, indicating that Kv channels contribute less to the maintenance of dilator tone in these vessels than in NG vessels.
The interaction of NO with superoxide reduces NO availability (which contributes to coronary vascular dysfunction) and generates peroxynitrite that causes nitration of various proteins [5, 6, 9]. Increased 3-NT formation has been observed in an animal model of DM [10], and peroxynitrite has been suggested to target subcellular compartments in vascular endothelium [37]. Interestingly, the enzymes involved in NO synthesis are compartmentalized in caveolae [38], and the impairment of flow-mediated dilation of small coronary arteries in patients with DM may be due to disruption of caveolae by peroxynitrite and hence endothelial NOS uncoupling [39]. Our observations that peroxynitrite mimicked the effects of high glucose (in terms of coronary arteriolar dysfunction) suggest that nitration of cellular proteins contributes to the detrimental effects of DM on the coronary circulation, but these findings must be taken with caution because there was no endothelial contribution in the present study.
RhoA/ROCK signaling is involved in numerous cellular processes, and may play a role in several pathological conditions [16, 17]. Various studies have reported that DM increases the expression and activity of RhoA and ROCK [27–33], consistent with the effects of HG observed in the present study. Both ROCK1 and ROCK2 are expressed in vascular endothelial and smooth muscle cells [33, 40]. In models of DM, ROCK inhibition improves microvascular damage, enhances cerebral vasodilation, reverses vasoconstriction, and improves coronary dysfunction through an enhancement of endothelial NOS [26, 27, 32, 33, 41, 42]. Such an enhancement of endothelial NOS by RhoA and ROCK inhibitors would mitigate peroxynitrite-induced disruption of caveolae and endothelial NOS uncoupling [39], potentially underlying our observations that C3 transferase and Y-27632 attenuated peroxynitrite-induced vascular dysfunction. Again, however, the contribution of the endothelium could not be assessed in the present study, and additional observations are necessary.
The findings that impaired vascular function and increased RhoA, ROCK1, and ROCK2 expression/activity occurred after only a 24-h exposure to HG imply that activation of the RhoA/ROCK pathway is an early event in the pathogenesis of diabetic complications. Endothelial-dependent vasodilation is compromised during the early stages of DM [11, 43]. Interestingly, in a rat model of early-stage DM, there was a trend toward upregulation of ROCK1 and ROCK2 expression, and a larger effect of ROCK inhibition (with fasudil) in the presence of NOS and cyclooxygenase blockade; furthermore, fasudil inhibited the occurrence of focal and segmental constrictions [44]. The involvement of RhoA/ROCK signaling in the early stages of DM would make ROCK a promising therapeutic target for treating diabetic complications [16, 45].
Hyperglycemia has been suggested to stimulate RhoA/ROCK signaling through the generation of reactive oxygen species (ROS) [46]. Interestingly, peroxynitrite may be a mediator of enhanced ROCK signaling in endothelial cells [40, 47], in agreement with the present findings. Although the mechanisms remain unclear, peroxynitrite and the Rho/ROCK pathway may interact via a positive feedback loop, as has been proposed for the relationship between Rho/ROCK, protein kinase C-β2 (PKCβ2), inducible NOS (iNOS), and ROS. PKCβ2 [48] is believed to contribute to diabetic complications, and inhibitors of this kinase can improve DM-induced retinopathy, cardiomyopathy, nephropathy, and neuropathy [49, 50]. Hyperglycemia is thought to activate PKCβ2, which in turn activates iNOS and RhoA/ROCK signaling [51–53]. PKCβ2 inhibition reduces iNOS-mediated cardiovascular abnormalities in diabetic rats [52]. iNOS may contribute to enhanced RhoA and ROCK expressions/activities; consistent with our observations, ROCK inhibition can downregulate RhoA expression [53]. Rats with experimental DM show increases in cardiac ROCK2 expression and PKCβ2 expression and activity; intriguingly, ROCK2 appears to directly interact with and activate PKCβ2 through phosphorylation at the T641 site [51]. Thus, RhoA/ROCK may contribute to cardiac dysfunction in DM by activating PKCβ2 and generating ROS, through a positive-feedback loop involving iNOS [54]. ROS may contribute to the PKCβ2 activation [55] and iNOS upregulation [56], while the effects of PKCβ2 may, in part, be mediated by ROS [57, 58] and induction of iNOS [52]. Further research is required to establish whether peroxynitrite-induced activation of the RhoA/ROCK pathway leads to alterations in NO signaling (such as iNOS induction) that exacerbate the generation of peroxynitrite. In addition, other signaling mechanisms may be involved in alterations to NO signaling, such as the JNK and TGFβ/SMAD pathways [59].
KV are involved in the regulation of vascular tone by limiting membrane depolarization [35]. A recent study has shown that diabetes leads to a reduced presence of KV1.2 in nerves from diabetic mice and diabetic patients [60]. Another study suggests that diabetes might be associated with gender-specific decreases in KV1.5 levels in myocytes from male mice, and that this effect might be triggered by the renin-angiotensin system [61]. In the present study, exogenous peroxynitrite had no significant effects on the mRNA and protein expression of Kv1.2 and Kv1.5, but significantly increased the levels of 3-NT-modified Kv1.2. A previous study revealed that excess peroxynitrite production might impair KV function in DM, which is consistent with the present study [62]. Another study has shown that peroxynitrite over-production induced by high glucose impaired cAMP-mediated vascular dilation acting through KV [63]. 4-AP is a KV blocker that allows the separation of the effects of KV from those of Ca2+-activated K+ channels [35]. In the present study, the contractile response to 4-AP was smaller in glucose-treated rings than in controls. Forskolin is an activator of adenylyl cyclase, leading to an increase in intracellular cAMP levels. The decreased vasorelaxation observed in the present study also points toward a role of KV in the impaired vascular response in DM. Taken together, these results suggest that Kv channels contribute less to the maintenance of dilator tone in HG- or peroxynitrite-incubated vessels than in NG vessels. However, further study is necessary to adequately assess the role and exact mechanisms of KV in DM and vascular dysfunction.
The increase in constrictor tone in response to 4-AP was smaller in HG- and peroxynitrite-treated vessels, indicating that Kv channels contribute less to the maintenance of dilator tone in these vessels than in NG vessels, possibly via 3-NT modification of RhoA, ROCK1, ROCK2, KV1.2, and/or other proteins. Importantly, these results suggest that even a short-term exposure to high glucose might be sufficient to induce this impaired response.
ANOVA: Analysis of variance
BSA: Bovine serum albumin
DAB: Diaminobenzidine
DMEM: Dulbecco's modified Eagle's medium
FBS: Fetal bovine serum
H&E: Hematoxylin and eosin
HG: High glucose
iNOS: Inducible NOS
LG: L-glucose
NG: Normal glucose
NOS: NO synthase
PCR: Polymerase chain reaction
PKCβ2: Protein kinase C-β2
ROCK: RhoA-associated protein kinase
ROS: Reactive oxygen species
VSMCs: Vascular smooth muscle cells
Nahser Jr PJ, Brown RE, Oskarsson H, Winniford MD, Rossen JD. Maximal coronary flow reserve and metabolic coronary vasodilation in patients with diabetes mellitus. Circulation. 1995;91(3):635–40.
Prior JO, Quinones MJ, Hernandez-Pampaloni M, Facta AD, Schindler TH, Sayre JW, et al. Coronary circulatory dysfunction in insulin resistance, impaired glucose tolerance, and type 2 diabetes mellitus. Circulation. 2005;111(18):2291–8. doi:10.1161/01.CIR.0000164232.62768.51.
Yonaha O, Matsubara T, Naruse K, Ishii H, Murohara T, Nakamura J, et al. Effects of reduced coronary flow reserve on left ventricular function in type 2 diabetes. Diabetes Res Clin Pract. 2008;82(1):98–103. doi:10.1016/j.diabres.2008.06.020.
Jenkins MJ, Edgley AJ, Sonobe T, Umetani K, Schwenke DO, Fujii Y, et al. Dynamic synchrotron imaging of diabetic rat coronary microcirculation in vivo. Arterioscler Thromb Vasc Biol. 2012;32(2):370–7. doi:10.1161/ATVBAHA.111.237172.
Bagi Z, Koller A, Kaley G. Superoxide-NO interaction decreases flow- and agonist-induced dilations of coronary arterioles in type 2 diabetes mellitus. Am J Physiol Heart Circ Physiol. 2003;285(4):H1404–10. doi:10.1152/ajpheart.00235.2003.
Bagi Z, Koller A, Kaley G. PPARgamma activation, by reducing oxidative stress, increases NO bioavailability in coronary arterioles of mice with type 2 diabetes. Am J Physiol Heart Circ Physiol. 2004;286(2):H742–8. doi:10.1152/ajpheart.00718.2003.
Forbes JM, Cooper ME. Mechanisms of diabetic complications. Physiol Rev. 2013;93(1):137–88. doi:10.1152/physrev.00045.2011.
Forstermann U, Sessa WC. Nitric oxide synthases: regulation and function. Eur Heart J. 2012;33(7):829–37. doi:10.1093/eurheartj/ehr30. 37a-37d.
Sharma A, Bernatchez PN, de Haan JB. Targeting endothelial dysfunction in vascular complications associated with diabetes. Int J Vasc Med. 2012;2012:750126. doi:10.1155/2012/750126.
Li H, Gutterman DD, Rusch NJ, Bubolz A, Liu Y. Nitration and functional loss of voltage-gated K+ channels in rat coronary microvessels exposed to high glucose. Diabetes. 2004;53(9):2436–42.
El-Remessy AB, Tawfik HE, Matragoon S, Pillai B, Caldwell RB, Caldwell RW. Peroxynitrite mediates diabetes-induced endothelial dysfunction: possible role of Rho kinase activation. Exp Diabetes Res. 2010;2010:247861. doi:10.1155/2010/247861.
Pacher P, Szabo C. Role of peroxynitrite in the pathogenesis of cardiovascular complications of diabetes. Curr Opin Pharmacol. 2006;6(2):136–41. doi:10.1016/j.coph.2006.01.001.
Romero MJ, Platt DH, Tawfik HE, Labazi M, El-Remessy AB, Bartoli M, et al. Diabetes-induced coronary vascular dysfunction involves increased arginase activity. Circ Res. 2008;102(1):95–102. doi:10.1161/CIRCRESAHA.107.155028.
Szabo C, Zanchi A, Komjati K, Pacher P, Krolewski AS, Quist WC, et al. Poly(ADP-Ribose) polymerase is activated in subjects at risk of developing type 2 diabetes and is associated with impaired vascular reactivity. Circulation. 2002;106(21):2680–6.
Nakagawa O, Fujisawa K, Ishizaki T, Saito Y, Nakao K, Narumiya S. ROCK-I and ROCK-II, two isoforms of Rho-associated coiled-coil forming protein serine/threonine kinase in mice. FEBS Lett. 1996;392(2):189–93.
Zhou H, Li YJ. Rho kinase inhibitors: potential treatments for diabetes and diabetic complications. Curr Pharm Des. 2012;18(20):2964–73.
Nunes KP, Rigsby CS, Webb RC. RhoA/Rho-kinase and vascular diseases: what is the link? Cell Mol Life Sci. 2010;67(22):3823–36. doi:10.1007/s00018-010-0460-1.
Chitaley K, Weber D, Webb RC. RhoA/Rho-kinase, vascular changes, and hypertension. Curr Hypertens Rep. 2001;3(2):139–44.
Mukai Y, Shimokawa H, Matoba T, Kandabashi T, Satoh S, Hiroki J, et al. Involvement of Rho-kinase in hypertensive vascular disease: a novel therapeutic target in hypertension. FASEB J. 2001;15(6):1062–4.
Morishige K, Shimokawa H, Eto Y, Kandabashi T, Miyata K, Matsumoto Y, et al. Adenovirus-mediated transfer of dominant-negative rho-kinase induces a regression of coronary arteriosclerosis in pigs in vivo. Arterioscler Thromb Vasc Biol. 2001;21(4):548–54.
Shibuya M, Hirai S, Seto M, Satoh S, Ohtomo E, Fasudil Ischemic Stroke Study G. Effects of fasudil in acute ischemic stroke: results of a prospective placebo-controlled double-blind trial. J Neurol Sci. 2005;238(1–2):31–9. doi:10.1016/j.jns.2005.06.003.
Kandabashi T, Shimokawa H, Miyata K, Kunihiro I, Eto Y, Morishige K, et al. Evidence for protein kinase C-mediated activation of Rho-kinase in a porcine model of coronary artery spasm. Arterioscler Thromb Vasc Biol. 2003;23(12):2209–14. doi:10.1161/01.ATV.0000104010.87348.26.
Masumoto A, Mohri M, Shimokawa H, Urakami L, Usui M, Takeshita A. Suppression of coronary artery spasm by the Rho-kinase inhibitor fasudil in patients with vasospastic angina. Circulation. 2002;105(13):1545–7.
Hamid SA, Bower HS, Baxter GF. Rho kinase activation plays a major role as a mediator of irreversible injury in reperfused myocardium. Am J Physiol Heart Circ Physiol. 2007;292(6):H2598–606. doi:10.1152/ajpheart.01393.2006.
Kishi T, Hirooka Y, Masumoto A, Ito K, Kimura Y, Inokuchi K, et al. Rho-kinase inhibitor improves increased vascular resistance and impaired vasodilation of the forearm in patients with heart failure. Circulation. 2005;111(21):2741–7. doi:10.1161/CIRCULATIONAHA.104.510248.
Kobayashi N, Horinaka S, Mita S, Nakano S, Honda T, Yoshida K, et al. Critical role of Rho-kinase pathway for cardiac performance and remodeling in failing rat hearts. Cardiovasc Res. 2002;55(4):757–67.
Arita R, Hata Y, Nakao S, Kita T, Miura M, Kawahara S, et al. Rho kinase inhibition by fasudil ameliorates diabetes-induced microvascular damage. Diabetes. 2009;58(1):215–26. doi:10.2337/db08-0762.
Budzyn K, Marley PD, Sobey CG. Targeting Rho and Rho-kinase in the treatment of cardiovascular disease. Trends Pharmacol Sci. 2006;27(2):97–104. doi:10.1016/j.tips.2005.12.002.
This research was supported by the National Natural Science Foundation of China (Grant NO. 30971240) (http://www.nsfc.gov.cn/).
The dataset(s) supporting the conclusions of this article is (are) included within the article.
ZS contributed to study design and drafting the manuscript. XW, WL, HP, XS, LM and HL contributed to acquisition of data, or analysis and interpretation of data. HL contributed to revising the manuscript critically for important intellectual content. All authors have given final approval of the version to be published.
The study was approved by the Animal Care and Use Committee of Beijing Friendship Hospital, Capital Medical University (Beijing, China).
Department of Heart Center, Capital Medical University Affiliated Beijing Friendship Hospital, Beijing, China
Zhijun Sun, Xing Wu, Weiping Li, Hui Peng, Xuhua Shen & Hongwei Li
Beijing Key Laboratory of Metabolic Disturbance Related Cardiovascular Disease, Beijing, People's Republic of China
Lu Ma & Huirong Liu
Correspondence to Hongwei Li.
Sun, Z., Wu, X., Li, W. et al. RhoA/rock signaling mediates peroxynitrite-induced functional impairment of Rat coronary vessels. BMC Cardiovasc Disord 16, 193 (2016). https://doi.org/10.1186/s12872-016-0372-6
Accepted: 28 September 2016
Peroxynitrite
RhoA/ROCK
Coronary artery
Vasoconstriction
Vasodilation
Interpreting the coefficient of a non-binary defined dummy variable
I'm reading an economics paper, where the author is using dummy variables to test for political effects on variables, such as GDP and unemployment. The model is a simple autoregressive model with nothing but a political effect dummy added.
Instead of having a "1 if a left-wing party is in power, 0 otherwise" variable, the author has a variable defined as "takes the value of -1 during the X months of a left-wing administration, 1 during the X months of a right-wing administration and 0 otherwise".
My question is: How would you actually interpret the dummy's coefficient? The paper is very vague and talks about implications but does not refer to coefficients individually much. So if a coefficient on a lagged GDP regression for the dummy is, say -0.76, how would you interpret it?
regression categorical-data interpretation binary-data autoregressive
luchonacho
BalmBalm
This could be a regression model with errors following an ARMA process or a so-called ARIMAX model, e.g. $y_t = \phi y_{t-1} + z_t + \beta x_t$ where $x_t$ is the covariate and $z_t$ is white noise. The interpretation of $\beta$ depends on the details, so please clarify. See robjhyndman.com/hyndsight/arimax
– Jarle Tufto
The political variable is defined for three categories: "left-wing", "right-wing", and "otherwise".
If the researcher wanted to be completely agnostic, and let the data speak the most, she would include a dummy for each category (or for two categories if a constant is used). For example, the model could include a constant, a dummy equal to 1 if the government is "right-wing", and a dummy equal to 1 if the government is "otherwise". The base category, reflected by the constant, would be that the government is "left-wing". In this case, the value of the constant reflects the effect of a "left-wing" government on GDP, whereas each dummy's coefficient indicates the difference in the effect on GDP of that ideology relative to the base category, i.e. a "left-wing" government. This approach allows each political ideology to have any effect on GDP. For example, it could be that the third category - "otherwise" - has exactly the same effect as a "left-wing" government; this is the case when its coefficient is zero.
In the paper you mention, the author is assuming that the differential effect of political ideology on GDP between adjacent categories is the same. That is, a ceteris paribus change in political ideology from "left-wing" to "otherwise", and a ceteris paribus change from "otherwise" to "right-wing", have exactly the same effect on GDP. Moreover, she is assuming that this effect follows a particular order ("left-wing", "otherwise", "right-wing"). In effect, this assumption amounts to adding restrictions on the coefficients of the agnostic model (more precisely, to assuming that the coefficient on the "right-wing" dummy is twice the coefficient on the "otherwise" dummy). As such, it is a more limited approach. From your question we don't know whether that decision is based on a pre-test, but to me it seems quite a restrictive approach, and probably unnecessary if enough degrees of freedom are available. Under that restriction, a coefficient of -0.76 would be read as the common per-step effect: relative to "otherwise" periods, right-wing periods are associated with 0.76 lower (and left-wing periods with 0.76 higher) values of the dependent variable, conditional on its lag.
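To make the two codings concrete, here is a minimal sketch in Python (the simulated data and the use of statsmodels OLS are my own illustration; the paper's autoregressive term is omitted for brevity):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 300

# Hypothetical data: 0 = left-wing, 1 = "otherwise", 2 = right-wing in each period
regime = rng.integers(0, 3, size=n)
y = 1.0 + 0.4 * (regime == 1) + 0.8 * (regime == 2) + rng.normal(scale=0.5, size=n)

# Agnostic model: constant (base category = left-wing) plus one dummy per other category
X_agnostic = sm.add_constant(np.column_stack([(regime == 1).astype(float),
                                              (regime == 2).astype(float)]))
print(sm.OLS(y, X_agnostic).fit().params)   # approx. [1.0, 0.4, 0.8]

# Restricted coding used in the paper: a single -1 / 0 / +1 variable
trichotomous = np.where(regime == 0, -1.0, np.where(regime == 2, 1.0, 0.0))
X_restricted = sm.add_constant(trichotomous)
print(sm.OLS(y, X_restricted).fit().params)
# Its slope is the assumed common step between adjacent categories; it equals half
# the agnostic "right-wing minus left-wing" difference only when the data really
# satisfy the equal-spacing restriction discussed above.
```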
One could accommodate both viewpoints by choice of coding. That is, by taking two contrasts: first the contrast representing the linear restriction, and second the remaining orthogonal one (which measures the difference between the middle category and the average of the other two, or in unbalanced designs perhaps a weighted version of that kind of comparison). If the first component explains most of the variation in the "agnostic" model it's close to linear, while if the second component is large, it is not.
Multimodality MRI-based radiomics for aggressiveness prediction in papillary thyroid cancer
Zedong Dai1,
Ran Wei1,
Hao Wang1,
Wenjuan Hu1,
Xilin Sun1,
Jie Zhu1,
Hong Li1,
Yaqiong Ge2 &
Bin Song1
To investigate the ability of a multimodality MRI-based radiomics model in predicting the aggressiveness of papillary thyroid carcinoma (PTC).
This study included consecutive patients who underwent neck magnetic resonance (MR) scans and subsequent thyroidectomy during the study period. The pathological diagnosis of thyroidectomy specimens was the gold standard to determine the aggressiveness. Thyroid nodules were manually segmented on three modal MR images, and then radiomics features were extracted. A machine learning model was established to evaluate the prediction of PTC aggressiveness.
The study cohort included 107 patients with PTC confirmed by pathology (cross-validation cohort: n = 71; test cohort: n = 36). A total of 1584 features were extracted from the contrast-enhanced T1-weighted (CE-T1WI), T2-weighted (T2WI) and diffusion-weighted (DWI) images of each patient. A sparse representation method was used for radiomics feature selection and for building the classification model. On the independent test set, the accuracy obtained using any single modality (CE-T1WI, T2WI or DWI) alone was not particularly satisfactory, whereas combining the three modalities achieved an accuracy of 0.917.
Our study shows that a radiomics model based on multimodality MR images can accurately distinguish aggressive from non-aggressive PTC before operation. This method may help inform the treatment strategy and prognosis of patients with aggressive PTC.
Thyroid cancer is one of the most common malignant tumors of the head and neck [1]. Its histological types include papillary, medullary and follicular carcinoma. The disease is difficult to detect at onset, and its slow course means that many patients already show aggressive features by the time of diagnosis. Papillary carcinoma has 15 subtypes [2], which differ in histological characteristics, imaging features and prognosis [3]. A tumor is considered aggressive when it is accompanied by local invasion, extraglandular invasion, lymph node metastasis or distant organ metastasis [4, 5]. The aggressive subtypes considered in the 2015 American Thyroid Association management guidelines for adult patients [6] include the tall cell, columnar cell and hobnail subtypes, and some studies [7] suggest that the diffuse sclerosing subtype is also aggressive. At present, most aggressive thyroid cancers require surgical resection, and the prognosis is relatively poor. Therefore, early diagnosis and identification of whether a thyroid cancer is aggressive are of great significance. Among the many approaches to assessing aggressive thyroid cancer, histopathological biopsy is the recognized gold standard [8]. However, the adequacy of biopsy results is affected by many factors, such as the size and location of the thyroid nodule, the presence or absence of calcification and liquefaction within it, and the experience of the operator and cytologist. The procedure also causes a degree of trauma to the patient and increases the risk of bleeding and infection. It is therefore of great significance to develop a non-invasive method to automatically identify the aggressiveness of thyroid cancer.
At present, the conventional imaging examinations for thyroid cancer include ultrasound, computed tomography (CT) and magnetic resonance imaging (MRI). Ultrasound is the first-line imaging modality for the diagnosis of thyroid cancer: it is fast, real-time, radiation-free and of high resolution. However, because of interference from the neck bones and air, it can be difficult to distinguish blood flow from the echo of surrounding tissue, and the accuracy of ultrasound in evaluating deep neck structures still requires further study [9, 10]. CT and MRI are natural complements to these weaknesses of ultrasound. CT can show the anatomical location and morphology of thyroid cancer and its relationship with surrounding tissues, but it involves ionizing radiation, which limits its scope of application. MRI, by contrast, has excellent sensitivity in the diagnosis of thyroid cancer because of its high soft-tissue resolution: multi-sequence scanning of the lesion yields clear images of the lesion and adjacent tissues and avoids the degradation of image quality caused by subcutaneous fat.
Radiomics grew out of computer-aided detection or diagnosis (CAD) and combines quantitative image analysis with machine learning methods [11]. Its basic function is to quantitatively analyze the tumor region of interest (ROI) through a large number of imaging features, so as to provide valuable diagnostic, prognostic or predictive information. The purpose of radiomics is to exploit these information resources to develop suitable models for diagnosis, prediction or prognosis, supporting personalized clinical decision-making and improving individualized treatment. MRI excels in soft-tissue imaging and can provide high-contrast structural and functional information; diffusion-weighted imaging (DWI) and dynamic contrast-enhanced MRI (DCE-MRI) can reflect tissue cellularity and angiogenesis. From these images, more effective imaging features can be extracted.
However, there are few reports on the application of MRI-based radiomics to evaluate aggressiveness in papillary thyroid carcinoma (PTC), which leaves room for research in this area. An MRI-based radiomics technique may therefore provide a noninvasive and accurate method for predicting aggressiveness in patients with PTC. This work aims to evaluate whether thyroid aggressiveness in PTC can be detected using a multimodality MRI-based radiomics method.
This retrospective study evaluated consecutive patients with thyroid nodules first detected by ultrasound between January 2018 and March 2019. According to the American College of Radiology Thyroid Imaging Reporting and Data System [12], the nodules were graded TR3-TR5. All patients underwent multiparametric MRI, followed by thyroid surgery (subtotal or total thyroidectomy) within 1 week after MRI. PTC was confirmed by pathology. The exclusion criteria were: (1) the pathological diagnosis was not PTC; (2) tumor size < 5 mm; (3) the pathological data of the tumor specimen could not be matched to the MRI findings; (4) poor image quality. Finally, 107 cases were evaluated.
The study was approved by our local institutional ethics committee.
MRI acquisition
All patients were scanned on an Excite HD 1.5 T scanner (GE Healthcare, USA) with an 8-channel dedicated neck surface coil, using the same scanning protocol. The applied parameters were as follows: axial T2-weighted (T2WI) fast recovery fast spin-echo with fat suppression, with echo time (TE) of 85 ms, repetition time (TR) of 1280 ms, slice thickness of 4–5 mm, matrix of 288 × 192, spacing of 1 mm, field of view (FOV) of 18 cm, and number of excitations (NEX) of 4; contrast-enhanced axial T1WI (CE-T1WI) with multiphase acquisition utilizing a fast spoiled gradient recalled echo sequence, with TE of 1.7 ms, TR of 5.7 ms, matrix of 192 × 256, FOV of 14 cm, and NEX of 1; DWI with a single-shot echo planar imaging (EPI) sequence, with minimal TE, TR of 6550 ms, slice thickness of 4–5 mm, matrix of 128 × 128, spacing of 0.5 mm, FOV of 14 cm, and NEX of 4 (b value, 800 s/mm2). The Magnevist contrast agent (Bayer Healthcare) was administered intravenously at 3 ml/s (0.2 ml/kg), followed by a 20 ml normal saline flush. Scanning was performed at 30, 60, 120, 180, 240 and 300 s after contrast agent administration to obtain images of six phases, acquired during breath-holds. A spatial saturation band was used to suppress signals from overlying fat and surrounding tissues.
Histopathologic analysis
Surgical tumor cases were evaluated and analyzed by experienced pathologists who have been engaged in relevant research for more than 10 years. Tumor specimens were paraffin embedded and sectioned, and stained with hematoxylin eosin (H&E). And then the established criteria were used to assess thyroid aggressive characteristics [13, 14]. Finally, all patients were divided into non-aggressiveness group and aggressiveness group.
MRI radiomics
Tumor segmentation
Tumor segmentation is the key step preceding high-throughput feature extraction and quantitative analysis. In this work, the Segment Editor module of the 3D Slicer software was used to delineate the thyroid cancer lesions; it is a module for segmentation that can subdivide and depict the region of interest. Two senior radiologists marked the lesions manually and independently, and discussed repeatedly to reach a final result in case of disagreement. The largest tumor of each patient was selected in order to reduce the potential variability among multiple tumors in the same individual, which improves the applicability of the results. To reduce the impact of segmentation accuracy on model performance, cases for which no agreed segmentation could be reached were excluded from the experimental dataset. Figure 1 shows the results for the three MRI modalities: the first row shows the images of the three modalities, and the second row the corresponding segmentation results.
Segmentation results of CE-T1WI, T2WI and DWI modalities, the red area in the second line represents the segmentation result
Radiomics feature extraction
Radiomics features provide a quantitative description of the lesion, and feature extraction is the basis of the subsequent classification. In this work, we extracted a total of 1584 features from the CE-T1WI, T2WI and DWI modalities, 528 per modality. The 528 features comprise: 18 intensity features, 15 shape features, 39 texture features (8 GLCM, 13 GLRLM, 13 GLSZM and 5 NGTDM) and 456 wavelet features (8 wavelet sub-bands × 57 intensity and texture features). More details about these features are given in Appendix 1. This part of the work was implemented in MATLAB.
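The authors' extraction pipeline was implemented in MATLAB; purely as an illustration of the same kind of high-throughput extraction, the sketch below uses the open-source pyradiomics package (its exact feature set differs from the 528 features listed above, and the file names are placeholders):

```python
# Minimal sketch of radiomics feature extraction with pyradiomics (illustrative only)
import SimpleITK as sitk
from radiomics import featureextractor

extractor = featureextractor.RadiomicsFeatureExtractor()
extractor.enableAllFeatures()               # shape, first-order (intensity) and texture classes
extractor.enableImageTypeByName('Wavelet')  # also compute features on wavelet-filtered images

image = sitk.ReadImage('CE_T1WI.nrrd')      # one MR modality (placeholder path)
mask = sitk.ReadImage('tumor_mask.nrrd')    # manual tumor segmentation (placeholder path)

features = extractor.execute(image, mask)
numeric = {k: v for k, v in features.items() if not k.startswith('diagnostics')}
print(len(numeric), 'features extracted')
```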
Radiomic feature selection
The problem considered in feature selection is to make the features sparse, that is, some redundant features are removed through this step, so as to reduce the computational cost. The input factors of the model are reduced, and the input–output relationship established by the model will be clearer, so the interpretability of the model can be improved. In this work, sparse representation is used to select a few crucial features for the following classification. The sparse representation-based feature selection model can be written as:
$$\hat{\mathrm{w}} = \underset{\mathrm{w}}{\operatorname{argmin}}\ \left\| \mathrm{y} - \mathrm{F}\mathrm{w} \right\|_{2}^{2} + \upgamma \left\| \mathrm{w} \right\|_{0}$$
where \(\mathrm{y}\in {\mathrm{R}}^{\mathrm{m}}\) is the vector of training sample labels, \(\mathrm{m}\) is the number of training samples, \(\mathrm{F}=[{\mathrm{f}}_{1},{\mathrm{f}}_{2}\cdots {\mathrm{f}}_{\mathrm{m}}{]}^{\mathrm{T}}\in {\mathrm{R}}^{\mathrm{m}\times 2\mathrm{K}}\) is the training feature matrix, and \(\upgamma\) is the sparsity control parameter. The absolute value of each element of the representation coefficient vector \(\mathrm{w}\) indicates the importance of the corresponding feature. Once \(\mathrm{w}\) has been calculated, we sort the features in descending order of importance according to the absolute values of the elements of \(\mathrm{w}\). Finally, we select the optimal subset of features using a sequential forward method based on cross-validation (on the cross-validation set): the first 5 features form the initial subset, the 6th to 100th features are then added in turn, the cross-validation accuracy is computed each time the subset is updated, and the subset with the highest accuracy is selected as the optimal one.
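A rough sketch of this ranking-plus-forward-search procedure is given below, under stated assumptions: an L1 Lasso penalty is used as a convex stand-in for the L0 term, a simple nearest-centroid classifier replaces the sparse-representation classifier described in the next subsection, and the data file names are placeholders.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import NearestCentroid   # placeholder classifier
from sklearn.preprocessing import StandardScaler

F = np.load('train_features.npy')   # shape (m, 1584): CE-T1WI + T2WI + DWI features
y = np.load('train_labels.npy')     # shape (m,): 0 = non-aggressive, 1 = aggressive

F = StandardScaler().fit_transform(F)
w = Lasso(alpha=0.1).fit(F, y).coef_
order = np.argsort(-np.abs(w))      # importance = absolute value of the coefficient

# Sequential forward search over the ranked features (cross-validation set only)
best_k, best_acc = 5, 0.0
for k in range(5, 101):
    acc = cross_val_score(NearestCentroid(), F[:, order[:k]], y, cv=5).mean()
    if acc > best_acc:
        best_k, best_acc = k, acc
selected = order[:best_k]
```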
Model construction and validation
Model building is the core of the data analysis stage. Based on the features selected in the previous step, we use a sparse representation method to establish a prediction model for classifying aggressive and non-aggressive thyroid cancer. Specifically, suppose \(\mathbf{F}= [{\mathbf{F}}_{1}\cdots {\mathbf{F}}_{\mathbf{c}}{\cdots \mathbf{F}}_{\mathbf{C}}]\) denotes the feature set of training samples from \(\mathbf{C}\) classes, and \({\mathbf{F}}_{\mathbf{c}}\) is the feature set of class c. The first step of sparse representation can be formulated as:
$$\{\boldsymbol{\Psi}, \boldsymbol{\Phi}\} = \underset{\boldsymbol{\Phi},\boldsymbol{\Psi}}{\operatorname{argmin}} \sum_{c=1}^{C} \left\| \mathbf{F}_{c} - \boldsymbol{\Psi}_{c}\boldsymbol{\Phi}_{c}\mathbf{F}_{c} \right\|_{\mathrm{F}}^{2} + \uplambda \left\| \boldsymbol{\Phi}_{c}\overline{\mathbf{F}_{c}} \right\|_{\mathrm{F}}^{2}, \quad \mathrm{s.t.}\ \left\| \boldsymbol{\varphi}_{q} \right\|_{2}^{2} \le 1$$
where \(\uplambda\) is a scalar constant and \(\overline{\mathbf{F}_{\mathbf{c}}}\) is the complementary matrix of \({\mathbf{F}}_{\mathbf{c}}\). The dictionary pair \(\boldsymbol{\Psi}\) and \(\boldsymbol{\Phi}\) is used to reconstruct and code \(\mathbf{F}\), respectively, and \(\boldsymbol{\varphi}_{q}\) is an atom of the dictionary \(\boldsymbol{\Psi}\). Once the dictionary pair \(\boldsymbol{\Psi}\) and \(\boldsymbol{\Phi}\) has been learned, the classification model can be formulated as:
$$\mathrm{l}_{i} = \underset{c}{\operatorname{argmin}} \left\| \mathrm{f}_{i} - \boldsymbol{\Psi}_{c}\boldsymbol{\Phi}_{c}\mathrm{f}_{i} \right\|_{2}, \quad c \in [1,\cdots ,C]$$
where \({\mathrm{l}}_{\mathrm{i}}\) is the class label of testing case \(\mathrm{i}\), and \({\mathrm{f}}_{\mathrm{i}}\) is the feature of \(\mathrm{i}\).
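A minimal numpy sketch of this assignment rule is given below; the random dictionary pairs are placeholders standing in for the learned per-class \(\boldsymbol{\Psi}_{c}\) and \(\boldsymbol{\Phi}_{c}\).

```python
import numpy as np

def sparse_representation_classify(f, dictionary_pairs):
    """Assign feature vector f to the class whose dictionary pair (Psi_c, Phi_c)
    reconstructs it with the smallest residual, as in the equation above."""
    residuals = [np.linalg.norm(f - Psi @ (Phi @ f)) for Psi, Phi in dictionary_pairs]
    return int(np.argmin(residuals))

# Example with random placeholder dictionaries for 2 classes and 75 selected features
rng = np.random.default_rng(0)
pairs = [(rng.normal(size=(75, 20)), rng.normal(size=(20, 75))) for _ in range(2)]
print(sparse_representation_classify(rng.normal(size=75), pairs))
```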
In our experiments, \(\upgamma\) and \(\uplambda\) were set to 0.1 and 0.01, respectively. The 107 cases were randomly divided into cross-validation (71) and testing (36) sets in a ratio of about 2:1. The cross-validation set was used for feature selection in advance. Once the number of features was determined, we used the cross-validation set to establish the sparse representation classification model and evaluated it on the testing set. We compare the classification performance of each modality as well as of the combination of the three modalities; here we use the simplest combination method, namely directly concatenating the features of the three modalities. The classification models were evaluated by calculating the receiver operating characteristic curve (ROC), accuracy (ACC), sensitivity (SEN), specificity (SPE), negative predictive value (NPV) and positive predictive value (PPV).
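For completeness, here is a short sketch of how the reported metrics can be computed from test-set predictions; the array files are placeholders and scikit-learn supplies the confusion matrix and AUC.

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.load('test_labels.npy')
y_pred = np.load('test_predictions.npy')
y_score = np.load('test_scores.npy')   # e.g. negative residual of the aggressive class

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
metrics = {
    'ACC': (tp + tn) / (tp + tn + fp + fn),
    'SEN': tp / (tp + fn),
    'SPE': tn / (tn + fp),
    'PPV': tp / (tp + fp),
    'NPV': tn / (tn + fn),
    'AUC': roc_auc_score(y_true, y_score),
}
print(metrics)
```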
Patient feature and selection of the study cohort
A total of 107 patients were evaluated. According to the pathological results, they were classified into an aggressiveness group and a non-aggressiveness group: 51 patients were in the aggressiveness group, with an average age of 42.37 ± 14.27 years (range 12–73 years), and 56 patients were in the non-aggressiveness group, with an average age of 46.68 ± 13.86 years (range 22–77 years). Table 1 summarizes the clinical characteristics of the PTC cases included in this study. In addition to age and gender, the lesion diameter, location, metastasis and multifocality were also recorded. Gender, lesion diameter and LN metastasis differed significantly between groups, consistent with previous studies. The cross-validation cohort included 71 cases and the test cohort 36 cases.
Table 1 Patient features in the aggressiveness and non-aggressiveness groups
Feature selection
A total of 528 high-throughput features were extracted from CE-T1WI, T2WI and DWI modalities, respectively. In order to verify the effectiveness of these features, we first selected the features with P < 0.001 (with extremely significant statistical significance) by comparing the P values of t-test, and then performed unsupervised clustering on these features. Figure 2 shows the confusion matrix of the clustering results. Through feature unsupervised clustering, 78.83% (79/107) of the cases were correctly classified, which demonstrates that these features are conducive to aggressiveness classification. After sparse representation-based feature selection, 75 features are used for final testing set classification.
Confusion matrix of the clustering results. (Agg is the abbreviation of aggressiveness)
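A sketch of this validation step under stated assumptions is shown below: a two-sample t-test filters features at P < 0.001, and k-means is used as one possible choice of unsupervised clustering; the data file names are placeholders.

```python
import numpy as np
from scipy.stats import ttest_ind
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
from sklearn.preprocessing import StandardScaler

F = np.load('all_features.npy')   # shape (107, 1584), placeholder file names
y = np.load('all_labels.npy')     # 0 = non-aggressive, 1 = aggressive

_, p = ttest_ind(F[y == 1], F[y == 0], axis=0)
F_sig = StandardScaler().fit_transform(F[:, p < 1e-3])

clusters = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(F_sig)
print(confusion_matrix(y, clusters))   # cluster labels are arbitrary; swap if needed
```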
Prediction model
All the results of our model are shown in the Tables 2 and 3 below. It can be seen that in the cross-validation set data, the ACC of CE-T1WI, T2WI and DWI modalities alone are 0.803, 0.817 and 0.887, respectively, while the cross-validation result of the combination of the three modalities is as high as 0.930, and the sensitivity and specificity are 0.912 and 0.946, respectively. Great results were also obtained in the final independent test set. The predicted ACC of CE-T1WI, T2WI and DWI alone were 0.778, 0.778 and 0.861, respectively, while the ACC of combining the three modalities were 0.917, and the sensitivity and specificity were 0.912 and 0.946, respectively. The above results show that our proposed model combining three modalities to predict whether the thyroid is aggressive is effective.
Table 2 The results of cross-validation set data
Table 3 The results of independent test set data
The ROC curves based on CE-T1WI modal features, T2WI modal features, DWI modal features and the features combined three modalities at the same time are shown in Fig. 3. The blue curve represents the results of the cross validation set, and the yellow curve represents the results of the independent test set. It can also be seen from the figure that the area under the ROC curve of the combined modality is larger than that of the other three separate modalities. However, the comparisons of ROC curves based on Delong test show that the combined model is only better than T1 (P = 0.05) and T2 (P = 0.05) models alone. And the difference between the results of the combined model and the DWI model is not statistically significant (P = 0.70).
ROCs for the CE-T1WI, T2WI, DWI and combined model in predicting aggressive and non-aggressive tumors in the cross-validation and test cohort
The combined model finally uses 75 features from the three modalities to achieve an ACC of 0.917 and an AUC of 0.960. Among the 75 features, there are 23 CE-T1WI features, 26 T2WI features and 26 DWI features, indicating that all three modalities play an important role in the prediction of thyroid aggressiveness. Among the 23 CE-T1WI features, there are 2 shape features, 5 intensity (grey-level) features and 16 texture features. Among the 26 T2WI features, there are 3 shape features, 5 intensity features and 18 texture features. Among the 26 DWI features, there are 0 shape features, 10 intensity features and 16 texture features. The texture features of the images play an important role in classification: they describe the distributions and relationships of image pixels, which better reflect the internal spatial heterogeneity of the lesions [15, 16].
In order to further analyze the proposed model, Fig. 4 gives the change of the model classification accuracy with the increase of the number of features. It can be seen that in a certain range, the accuracy of the model increases with the increase of the number of features, which highlights the effectiveness of feature screening. With the further increase of the number of features, the accuracy of the model begins to decline, indicating that some redundant features begin to appear in the feature subset [17]. In Fig. 5 we visually compare the distribution of 5 features with the smallest P values in CE-T1WI, T2WI and DWI modalities through boxplots. The P values of t-test of these features are less than 0.001, indicating that these features have extremely significant statistical significance in classification tasks. It can also be clearly seen from the box diagram that these features of the positive and negative groups of cases are significantly different.
Variation of model accuracy with the number of features
The top 5 features of importance in the classification task, CE-T1WI, T2WI and DWI, respectively
Our research shows that the machine-learning-based radiomics prediction model based on the fusion of three modalities of MRI is expected to become a noninvasive, convenient, and rapid method to evaluate the aggressiveness and non-aggressiveness of thyroid cancer. The 2015 ATA guidelines are stricter in the management of differentiated thyroid cancer, and different clinical treatment methods are adopted according to the risk assessment. Therefore, it is particularly important to comprehensively and accurately evaluate thyroid cancer before treatment. At present, the gold standard is the histopathological results of thyroid fine needle puncture, and the pathological diagnosis generally takes more than 24 h. However, using machine learning-based image analysis can predict whether it is aggressive for thyroid patients in a non-invasive and rapid way, which can not only reduce the pain of patients but also greatly shorten the diagnosis time. Therefore, the model is helpful for clinicians to design treatment methods.
MRI is widely used in tumor diagnosis because of its non-invasive and radiation-free characteristics. Based on medical image data mining technology, imaging omics quantifies tumors as high-throughput features, and then establishes the complex correlation between these features and many indicators of disease occurrence, development, and prognosis, so as to improve the accuracy of disease diagnosis and treatment efficiency. At present, the imaging research reports on thyroid cancer are mainly established on ultrasound and CT, and there are few MRI related studies. MRI has a high resolution of soft tissue density, and can accurately display the size, range, location, lymph node metastasis and the relationship with surrounding tissues and organs [18, 19]. Ma et al. [20] found that the radiological characteristics of T2WI data can predict the pathological extracapsular expansion of prostate cancer patients. DWI is the only noninvasive examination method to reflect the diffusion of living tissue at present [21]. A meta-analysis [22] shows that quantitative DWI is an accurate method to distinguish benign and malignant thyroid nodules, with noninvasive, radiation-free, sensitivity of 90% and specificity of 95%. In this paper, we investigate the performance of these three modalities and their combinations on the thyroid cancer aggressiveness prediction task.
In this study, conventional MRI sequences (CE-T1WI, T2WI) and functional imaging sequences (DWI images) were included in the study at the same time, and a multimodal imaging radiomics method was proposed to predict the aggressiveness of thyroid. Firstly, we extracted 528 high-throughput features including shape, intensity, texture and wavelet from CE-T1WI, T2WI and DWI respectively. Then, the sparse representation method was used to filter the combined 1584 features. Due to the limited number of samples in this study, the sparse representation classifier based on nonparametric training is selected for classification, so as to reduce the risk of the model overfitting [23].
Lu et al. [24] evaluated tumor invasiveness by determining an ADC threshold obtained from preoperative DW-MRI (AUC, 0.85). Hu et al. [25] used the histological characteristic of extrathyroidal extension (ETE) as a tool to predict aggressiveness and showed that, in differentiating tumors with ETE from those without, the AUC of the mean ADC500 value was 0.905, of the ADC300 value 0.607 and of the ADC800 value 0.770 (p < 0.001). Without the aid of ETE histological features, we directly extracted radiomics features from multimodal MRI images for modeling, and the results were better than those in [24, 25], indicating that the radiomics model based on CE-T1WI, T2WI and DWI image features has an outstanding ability to predict the invasiveness of thyroid cancer. It also demonstrates that radiomics is a new non-invasive diagnostic approach: extracting high-throughput features from medical images and establishing appropriate models can serve as a tool to predict thyroid invasiveness.
There are some deficiencies in this study. Firstly, in terms of data preprocessing, ROI regions are divided manually, which is a time-consuming process. In the future, we can try to use the method of deep learning to realize automatic segmentation. Secondly, for the radiomics model, we directly splice the features extracted from each modality for the radiomics model. Although multimodal information has been applied in model prediction, it is difficult to effectively capture the deep correlation between modalities. In future work, we will establish a multimodal classifier to integrate multimodal related information in the classification process. Third, in terms of experimental data, this study only carries out experimental verification on single center data. Although we strictly divide the training and test sets, the stability and robustness of the model still need to be verified on multi center, multi parameter and multi device data sets. Therefore, in future work, we will further study the stability of multicenter data model.
The datasets analyzed in this study are available from the corresponding author on request.
Rongzhong H, Jiang Lihong XuYu, et al. Comparative diagnostic accuracy of contrast-enhanced ultrasound and shear wave elastography in differentiating benign and malignant lesions: a network meta-analysis. Front Oncol. 2019;9:102.
Lam KY. Pathology of endocrine tumors update: World Health Organization new classification 2017—other thyroid tumors. AJSP Rev Rep. 2017;22:209–16.
Janovitz T, Barletta JA. Clinically relevant prognostic parameters in differentiated thyroid carcinoma. Endocrine Pathol. 2018;29(4):357–64.
Paparodis RD, Bantouna D, Imam S, et al. The non-interventional approach to papillary thyroid microcarcinomas. An "active surveillance" dilemma. Surg Oncol. 2019;29:113–7.
Ohkuwa K, Sugino K, Nagahama M, et al. Risk stratification in differentiated thyroid cancer with RAI-avid lung metastases. Endocrine Connect. 2021;10:825–33.
Haugen BR, Alexander EK, Bible KC, et al. American Thyroid Association Management Guidelines for adult patients with thyroid nodules and differentiated thyroid cancer: The American Thyroid Association Guidelines task force on thyroid nodules and differentiated thyroid cancer. Thyroid. 2015;2015:2165.
Lam AKY, Lo CY, et al. Diffuse sclerosing variant of papillary carcinoma of the thyroid: a 35-year comparative study at a single institution. Ann Surg Oncol. 2006;13(2):176–81.
Chen L, Chen L, Liu J, et al. The association among quantitative contrast-enhanced ultrasonography features, thyroid imaging reporting and data system and BRAF V600E mutation status in patients with papillary thyroid microcarcinoma. Ultrasound Quart. 2018.
Haugen BR. 2015 American Thyroid Association Management Guidelines for adult patients with thyroid nodules and differentiated thyroid cancer: what is new and what has changed. Cancer. 2017;123(3):372–81.
Liang J, Huang X, Hu H, Liu Y, Zhou Q, Cao Q, Wang W, Liu B, Zheng Y, Li X, et al. Predicting malignancy in thyroid nodules: radiomics score versus 2017 American College of Radiology Thyroid Imaging, Reporting and Data System. Thyroid. 2018;28(8):1024–33.
Huynh BQ, Li H, Giger ML. Digital mammographic tumor classification using transfer learning from deep convolutional neural networks. J Med Imaging. 2016;3(3):034501.
Griethuysen J, Fedorov A, Parmar C, et al. Computational radiomics system to decode the radiographic phenotype. Can Res. 2017;77(21):e104–7.
Kazaure HS, Roman SA, Sosa JA. Aggressive variants of papillary thyroid cancer: incidence, characteristics and predictors of survival among 43,738 patients. Ann Surg Oncol. 2012;19(6):1874–80.
Hu MJ, He JL, Tong XR, et al. Associations between essential microelements exposure and the aggressive clinicopathologic characteristics of papillary thyroid cancer. Biometals. 2021;34(4):909–21.
Zhang H, Hung CL, Min G, Guo JP, Liu M, Hu X. GPU-accelerated GLRLM algorithm for feature extraction of MRI. Sci Rep. 2019;9(1):10883.
Arebey M, Hannan MA, Begum RA, Basri H. Solid waste bin level detection using gray level co-occurrence matrix feature extraction approach. J Environ Manag. 2012;104:9–18.
Guoqing Wu, Wang Y, Jinhua Yu. A sparse representation-based radiomics for outcome prediction of higher-grade gliomas. Med Phys. 2019;46(1):250–61.
Lee DH, Kang WJ, Seo HS, et al. Detection of metastatic cervical lymph nodes in recurrent papillary thyroid carcinoma: computed tomography versus positron emission tomography-computed tomography. J Comput Assist Tomogr. 2009;33:805–10.
Sakai O, Curtin HD, Romo LV, et al. Lymph node pathology: benign proliferative, lymphoma, and metastatic disease. Radiol Clin North Am. 2000;38(5):979–98.
Ma S, Xie H, Wang H, Yang J, Han C, Wang X, Zhang X. Preoperative prediction of extracapsular extension: radiomics signature based on magnetic resonance imaging to stage prostate cancer. Mol Imaging Biol. 2020;22(3):711–21.
Chen L, Xu J, Bao J, et al. Diffusion-weighted MRI in differentiating malignant from benign thyroid nodules: a meta-analysis. BMJ Open. 2016;6(1):e008413.
Razek AA, Sadek AG, Kombar OR, et al. Role of apparent diffusion coefficient values in differentiation between malignant and benign solitary thyroid nodules. Am J Neuroradiol. 2008;29(3):563–8.
Wu G, Chen Y, Wang Y, Yu J, et al. Sparse representation-based radiomics for the diagnosis of brain tumors. IEEE Trans Med Imaging. 2018;37(4):893–905.
Lu Y, Moreira AL, Hatzoglou V, et al. Using diffusion-weighted MRI to predict aggressive histological features in papillary thyroid carcinoma: a novel tool for pre-operative risk stratification in thyroid cancer. Thyroid. 2015;25(6):672–80.
Hu S, Zhang H, X Wang, et al. Can diffusion-weighted MR imaging be used as a tool to predict extrathyroidal extension in papillary thyroid carcinoma. Acad Radiol. 2020.
We thank all members of the Department of Radiology, Pathology and General Surgery (Minhang Hospital, Fudan University) for constructive advice in manuscript preparation.
This research was funded by the Shanghai Municipal Health and Family Planning Commission (202140325), the Shanghai Minhang Science and Technology Commission (2020MHZ048) and Natural Science Foundation of Shanghai (19ZR1446200).
Department of Radiology, Minhang Hospital, Fudan University, 170 Xinsong Road, Shanghai, 201199, People's Republic of China
Zedong Dai, Ran Wei, Hao Wang, Wenjuan Hu, Xilin Sun, Jie Zhu, Hong Li & Bin Song
GE Healthcare, Shanghai, People's Republic of China
Yaqiong Ge
ZD, RW and BS conceived and designed this study. ZD wrote the main manuscript text. RW, HW, WH, XS, JZ, HL and YG prepared all figures and tables. All authors reviewed the manuscript. All authors read and approved the final manuscript.
Correspondence to Bin Song.
The Institutional Review Board of Minhang Hospital approved this retrospective study and waived the requirement for written informed consent due to its retrospective nature. The study was conducted in accordance with the Declaration of Helsinki.
Table 4 The summary of 528 features
Dai, Z., Wei, R., Wang, H. et al. Multimodality MRI-based radiomics for aggressiveness prediction in papillary thyroid cancer. BMC Med Imaging 22, 54 (2022). https://doi.org/10.1186/s12880-022-00779-5
Radiomics
Papillary thyroid carcinoma
Multimodality MRI
Sparse representation
The Ultrasound Journal
Vertical displacement of pleura: a new method for bronchospasm evaluation?
Sara Raquel Martins ORCID: orcid.org/0000-0003-0978-57241 &
Ramon Nogué2
The Ultrasound Journal volume 12, Article number: 42 (2020) Cite this article
Lung ultrasonography has been increasingly recognized as a valuable diagnostic tool. In adult patients with asthma/chronic obstructive pulmonary disease and wheezing, LUS usually presents as an A/nude profile (normal profile, with sliding and A-lines, and without any abnormal findings) or at most reveals a decrease/absence of lung sliding. Therefore, until now simple point-of-care ultrasonography appeared to be unable to assess the severity of airflow limitation.
We report the case of a woman presenting to the emergency department with an asthma exacerbation. Bedside ultrasound showed the usual A/normal profile, but also an associated vertical pleural displacement, probably secondary to hyperinflation and accessory muscle recruitment. We evaluated the described movement with M-mode and established a comparison index between end-inspiration and end-expiration, using the skin as reference. This index showed improvement and complete normalization during treatment.
Pleural vertical displacement appears to be a sonographic alteration associated to bronchospasm and accessory muscle recruitment. It is easily identifiable and measurable on LUS, thus possibly representing a new method to evaluate bronchospasm and monitoring treatment response. Further research is needed to confirm or refute this finding.
Over the past few years lung ultrasonography (LUS) has been increasingly recognized as a valuable diagnostic tool. Ultrasound innocuousness, combined with its fast learning curve, portability and low cost, has resulted in its ubiquitous use in almost any setting [1,2,3,4,5,6]. International evidence-based recommendations for point-of-care lung ultrasound, published in 2012, set the foundations for a more regulated use [7]. Since then, several studies have reported its applications in a large range of medical conditions.
LUS relies on the interpretation of four fundamental findings: pleural sliding; presence of artifacts arising from the pleural line that are generated by the pleura itself (A-lines) or by alterations of the fluid–air composition of the interstitia and alveoli (B-lines); direct visualization of condensed subpleural pulmonary tissue with variable aeration; and detection of pleural effusion.
The first step of LUS is to identify the "bat sign", corresponding to a hyperechogenic line lying between two adjacent ribs; this hyperechogenic line is the pleural line. The first LUS evaluation is to check whether the pleural line displays lung sliding (the impression of horizontal movement produced by the visceral pleura sliding over the parietal pleura during breathing, represented on M-mode by the "seashore sign"), implying the absence of liquid or air between the two pleural layers at the explored area. The operator should then identify the reverberation artifacts, presenting either as A-lines (horizontal lines parallel to the pleural line and caused by its own reverberation) or B-lines (vertical lines that may be representative of interstitial syndrome) [1, 8, 9].
In adult patients with asthma/chronic obstructive pulmonary disease (COPD) presenting with wheezing, LUS usually shows as an A/nude profile (normal profile, with sliding and A-lines, without any other findings), or at most reveals a decrease in the intensity/absence of pleural sliding due to over-tension. However, this is not only unspecific (as it can be associated with other conditions, most notably pneumothorax), but also very difficult to quantify [1, 10, 11]. Therefore, until this moment, simple point-of-care ultrasonography (POCUS) appeared to be unable to assess the severity of airflow limitation. We present a possible new sonographic method to evaluate airflow impairment and monitoring treatment response.
We report the case of an obese (BMI of 35) 70-year-old woman, with known history of asthma with frequent exacerbations, in spite of treatment with inhaled corticosteroids and long-acting bronchodilators. She presented in the Emergency Department (ED) breathless, with diffuse wheezing, tachypnea (30/min), room air SpO2 90%, and tachycardia (110 bpm) with normal blood pressure.
A bedside LUS was performed at both apices, with a SONOSITE ® turbo ultrasound system, using a straight linear array probe, with depth setting of 4 cm and soft tissue preset. As the "bat sign" was localized and pleural sliding observed, vertical displacement of the pleural line with each breath (Fig. 1) was noted, probably secondary to hyperinflation and accessory muscle recruitment and its direct effects on parietal pleura. We evaluated the described movement with M-mode and established a comparison index between end-inspiration (A) and end-expiration (B), using the skin as reference:
$$ \frac{{{\text{Skin-to-maximal inspiration point distance}}\left( {\text{A}} \right) \, {-}{\text{ skin-to-maximal expiration point distance}}\left( {\text{B}} \right)}}{{{\text{skin-to-maximal inspiration point distance}}\left( {\text{A}} \right)}} \times 100. $$
M-mode evaluation of pleural vertical displacement and calculus of the comparison index: \( \frac{{{\text{Skin-to-maximal inspiration point distance}}\left( {\text{A}} \right) \, {-}{\text{ skin-to-maximal expiration point distance}}\left( {\text{B}} \right)}}{{{\text{skin-to-maximal inspiration point distance}}\left( {\text{A}} \right)}} \times 100 \)
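For readers who want to reproduce the calculation, a small helper is sketched below; the numerical values are purely illustrative and are not the patient's actual measurements.

```python
def pleural_displacement_index(skin_to_pleura_inspiration: float,
                               skin_to_pleura_expiration: float) -> float:
    """Return the pleural vertical displacement index (percentage).

    Both distances are measured on the frozen M-mode trace from the skin to the
    pleural line, at end-inspiration (A) and end-expiration (B), in the same unit.
    """
    a, b = skin_to_pleura_inspiration, skin_to_pleura_expiration
    return (a - b) / a * 100

# Hypothetical example values chosen only to illustrate an index of about 14%
print(round(pleural_displacement_index(2.15, 1.85), 1))  # prints 14.0
```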
The described index measured at admission was 14% (Fig. 1). The patient was then started on usual asthma exacerbation treatment with short-acting bronchodilation and systemic corticosteroids. First re-evaluation, performed at the same point 17 min after treatment administration, showed an index reduction to 6% (Fig. 2). With further treatment, pleural vertical displacement finally disappeared and the index progressed to zero (Fig. 3). Along with the index decrease, symptomatic relief and improved chest auscultation were observed. Peak expiratory flow rate (PEFR) or spirometry were not tested due to lack of patient collaboration, as frequently occurs in the ED.
M-mode re-evaluation of the pleural displacement and index calculation at 17 min of treatment, showing improvement with treatment
M-mode re-evaluation of the pleural displacement and index calculation at 21 min of treatment, showing complete resolution of pleural displacement an index normalization
Asthma and COPD are important causes of morbidity and mortality worldwide. Asthma is characterized by fluctuating symptoms of wheeze, shortness of breath, chest tightness and/or cough and by variable expiratory airflow limitation secondary to airway hyperreactivity and bronchospasm [12]. COPD presents with persistent respiratory symptoms and airflow limitation that is due to airway and/or alveolar abnormalities [13]. Also, asthma/COPD both have in common airflow limitation. During acute exacerbations such flow limitation results in hyperinflation, which seems to be related with sustained post-inspiratory activity of the inspiratory muscles [14]. We hypothesized that hyperinflation and accessory muscle recruitment result in pleural vertical displacement and could explain the findings described. As airflow limitation is reversed, hyperinflation ameliorates and accessory muscles are no longer recruited, thus the pleural vertical displacement will decrease.
We could not find any previous description of such pleural movement in our bibliography research.
Dr. Lichtenstein described two signals detected in M-mode LUS in cases of severe acute dyspnea: the Ifrac and the Nogue-Armendariz phenomena. The Ifrac phenomenon is secondary to accessory respiratory muscle activation, creating a pattern of "muscular sliding" in addition to usual lung sliding. This muscular sliding shows a "seashore pattern" on M-Mode, identifiable above the pleural line, unlike lung sliding that produces a "seashore pattern" under the same line. The Nogue-Armendariz phenomenon represents the rare occurrence of perfect synchrony between such muscular and lung sliding, resulting in a permanent "sand pattern" on M-mode arising at the muscular line and paralleled with the "sand pattern" caused by the movement of the pleural layers [15]. In our case, these phenomena are visible in inspiration during the first evaluation (line A of Fig. 1). Although such findings seem to be present in cases of severe acute dyspnea, they were not correlated with airflow limitation itself and might be difficult to detect in the short time evaluations of the emergency setting.
Some case reports also mentioned absence of B-mode pleural sliding, with loss of its M-mode correspondent "seashore sign" and appearance of "bar-code sign", in cases of severe airflow impairment [10, 11]. This is also assumed to be a consequence of hyperinflation with pleural over-tension. However, such findings are not specific of those diseases, being more commonly associated with pneumothorax (which can itself present as a complication of severe asthma/COPD exacerbations), but also described in other conditions such as atelectasis, pleural adhesions, severe emphysema or severe fibrosis. Furthermore, a reduction/absence of pleural movement cannot be quantified and therefore would not be suitable to assess the degree of bronchospasm and its response to treatment.
Bronchospasm monitoring is difficult even with standard tests. Although COPD and asthma guidelines underline spirometry and/or peak expiratory flow rate (PEFR) as pivotal tools for diagnosis and monitoring, they also recognize that those tests show low sensitivity and vary with age. Both techniques also require patient collaboration and training for a correct measurement, and PEFR monitoring has not been proven to improve asthma control beyond symptom scores [12, 16, 17], nor could it predict the need for hospital admission [18]. These features imply that such complementary tests lack practical applicability in the acute setting, and the American College of Emergency Physicians has already released a statement emphasizing that the evidence does not support PEFR monitoring for all adult asthma patients [19].
Therefore, there is currently an absence of practical and easily performable tests to diagnose and monitor airflow limitation, particularly in the emergency setting. The pleural displacement index could be a quick, simple method to indirectly monitor airflow impairment at the bedside, independent of patient collaboration.
New methods for bronchospasm evaluation in the emergency department are needed, and although LUS has been increasingly used as a diagnostic complement there is no description in medical literature of any specific or suggestive sign of severe airflow limitation identifiable with this technique.
We present a pleural vertical displacement index that might represent a new method for monitoring bronchospasm and measuring the severity of asthma/COPD exacerbations. Being a quick, simple and non-invasive test, it could be performed at the patient's bedside in the ED or during hospitalization. This might allow easily performable monitoring of airflow limitation and its response to treatment, more practical than PEFR, especially in the exacerbated and breathless patient.
LUS evaluation of pleural vertical displacement in the setting of acute airflow impairment will need further validation. How it will present through the severity spectrum of wheezing patients (from not severe exacerbations to imminent respiratory arrest) is a question still to be answered. Also, it is uncertain what is the minimal percentual point difference that translates into a significant clinical improvement or deterioration, and even if these variations occur simultaneously, before or after other clinical signs. Patients with chronic very severe lung hyperinflation may be particularly challenging as pleural vertical displacement may not vary so much during exacerbations. Finally, it is still unclear if it could also be useful in other causes of acute severe dyspnea.
Data sharing is not applicable to this article as no datasets were generated or analyzed during the current study.
COPD:
Chronic obstructive pulmonary disease
LUS:
Lung ultrasonography
PEFR:
Peak expiratory flow rate
POCUS:
Point-of-care ultrasonography
No funding was obtained for this study.
Centro Hospitalar Universitário do Porto, Porto, Portugal
Sara Raquel Martins
Universitat de Lleida, Lleida, Spain
Ramon Nogué
SRPM contributed to data collection, literature review and was a major contributor in writing the manuscript. RN was responsible for data collection and critically revised the manuscript and contributed with important intellectual content. Both authors read and approved the final manuscript.
Correspondence to Sara Raquel Martins.
Consent for publication was obtained from the patient.
Martins, S.R., Nogué, R. Vertical displacement of pleura: a new method for bronchospasm evaluation?. Ultrasound J 12, 42 (2020). https://doi.org/10.1186/s13089-020-00184-5
Asthma/COPD
|
CommonCrawl
|
Relation between thermodynamic reversible process and reversible reaction
I know it seems to be a weird question, but for a long time I have been wondering whether there is any relation between a thermodynamically reversible process and a reversible reaction. Do they have any connection, and if so, how?
To complement other answers, I provided here definitions extracted from official IUPAC sources:
reversible process
A definition comes via the concept of entropy (Ref. 1):
Quantity the change in which is equal to the heat brought to the system in a reversible process at constant temperature divided by that temperature. Entropy is zero for an ideally ordered crystal at $\pu{0 K}$. In statistical thermodynamics $$S=k \ln W$$ where $k$ is the Boltzmann constant and $W$ the number of possible arrangements of the system.
The second sentence is a formulation of the third law, while the last is a definition of entropy viewed from the context of statistical thermodynamics. The first sentence describes the thermodynamic entropy. Coupled with the second law expressed as an inequality relation between heat and entropy,
$$\Delta S \ge \frac{q}{T} \tag{const. T}$$
it provides a functional definition of a reversible process (it is a process that maximizes $q$).
reversible reaction
The definition provided by IUPAC is related to microscopic reversibility:
In a reversible reaction, the mechanism in one direction is exactly the reverse of the mechanism in the other direction. This does not apply to reactions that begin with a photochemical excitation.
Such a definition implies thermodynamic reversibility.
IUPAC. Compendium of Chemical Terminology, 2nd ed. (the "Gold Book"). Compiled by A. D. McNaught and A. Wilkinson. Blackwell Scientific Publications, Oxford (1997). Online version (2019-) created by S. J. Chalk. ISBN 0-9678550-9-8. https://doi.org/10.1351/goldbook.
Buck Thorn♦
According to Merriam Webster, a reversible reaction is:
a reaction that takes place in either direction according to conditions (as the formation of hydriodic acid by union of hydrogen and iodine or its decomposition into these elements)
For this type of reaction, you would use a double-harpoon to write the chemical equation:
$$\ce{H2(g) + I2(g) <=> 2HI(g)}$$
and would be able to write an equilibrium constant expression
$$K = \frac{[\ce{HI}]^2}{[\ce{H2}][\ce{I2}]}$$
with [] denoting activity or fugacity. If you start with pure reactants, the reaction would go forward (decrease in reactants, increase in products). If you start with pure products, the reaction would go backwards (increase in reactants, decrease in products). Once the reaction has reached equilibrium, concentrations would no longer change.
In any case, reactions in both directions would occur at the molecular level. When the reaction is at equilibrium, you just won't know it at the macroscopic level and would have to disturb the equilibrium to see macroscopic changes again.
Again according to Merriam Webster, a reversible process is:
an ideal process or series of changes of a system which is in complete equilibrium at each stage such that when the process is reversed each of the changes both internal and external is reversed but with the amount of transferred energy unaltered
Applied to chemical reactions, a reversible process is one where the Gibbs energy is constant throughout (or where the entropy of the universe is not increased by it). We would not expect any macroscopic changes (the process is at equilibrium), which is why this is an ideal process (not a real one). The closest real process is one where the system is near equilibrium, and the increase in entropy of the universe is minimal.
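To tie the two notions together concretely, here are the standard textbook relations (added for illustration; they are not part of the quoted definitions). At an arbitrary composition, with $Q$ the reaction quotient,

$$\Delta_\mathrm{r} G = \Delta_\mathrm{r} G^\circ + RT\ln Q$$

A reaction carried out reversibly remains at equilibrium at every stage, so $\Delta_\mathrm{r} G = 0$ and $Q = K$, hence

$$\Delta_\mathrm{r} G^\circ = -RT\ln K$$

For the hydrogen iodide example above, taking the commonly quoted illustrative value $K \approx 54$ near $\pu{700 K}$, this gives $\Delta_\mathrm{r} G^\circ \approx -(\pu{8.314 J mol-1 K-1})(\pu{700 K})\ln 54 \approx -23\ \pu{kJ mol-1}$.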
What does IUPAC say?
The IUPAC Gold book makes reference to a paper (DOI: https://doi.org/10.1351/pac199466051077) offering a glossary of terms used in physical organic chemistry. In their definition of chemical equilibrium, it says:
The definition of reversible process matches the one I cite above, and they use the same notation for the equilibrium reaction. However, the connection drawn between a reversible process and chemical equilibrium surprises me. When a reaction approaches equilibrium, the Gibbs energy does change, and it will not return to the initial state unless work is done on the system. Maybe the confusion regarding reversible process vs. reaction stems in part from this.
Karsten Theis
I agree with both the MW and the IUPAC definitions, but perhaps you can help me check whether I am getting this right.
If a reactive system involves a reaction that is reversible in the chemical sense (i.e. it "goes both ways" around its equilibrium, meaning the activities (~ concentrations) of its components can match the value of the equilibrium constant $\mathrm{K(T)}$), then as a whole there are changes within the system that satisfy $\Delta S_{\text{system}} = 0$ (i.e. reversible in a thermodynamic sense), because the reactions within have molar entropy changes of $+\Delta S$ (for the forward reaction, typically non-zero) and $-\Delta S$ (for the backward reaction), with forward and backward reactions proceeding at the same rate (dynamic equilibrium).
In a previous reply, I attempted the following wording, which perhaps is more detailed but does not make explicit the distinction between entropy of reaction and entropy of the system:
With $\Delta G = \Delta H - T\Delta S$ for a reversible reaction, where $\Delta G$ is the change in Gibbs energy, $\Delta H$ the enthalpy of reaction and $\Delta S$ the change in entropy (and $T$ and $P$ constant), I think this is what is going on for a system where a dynamic equilibrium is established between the backward and forward reactions, which could qualify as a reversible process once the equilibrium is reached. $T$ sets the value of the equilibrium constant and hence $\Delta G$ for these conditions. $\Delta H$ and $\Delta S$ are typically considered to be fairly constant (you could see how $\Delta S$ may in fact depend on temperature though, just through different vibration energies of different bonds).
In the special case where $\Delta G = 0$ (when equilibrium constant $K = 1$), with $\Delta G = \Delta H - T\Delta S$, one has to assume that the $\Delta S$ is fully reversible, which I think assumes faultless conversion of $T\Delta S$ to $\Delta H$ and vice-versa, between reacting species.
Dimitri Mignard
|
CommonCrawl
|
A. Maximum GCD
Let's consider all integers in the range from $$$1$$$ to $$$n$$$ (inclusive).
Among all pairs of distinct integers in this range, find the maximum possible greatest common divisor of integers in pair. Formally, find the maximum value of $$$\mathrm{gcd}(a, b)$$$, where $$$1 \leq a < b \leq n$$$.
The greatest common divisor, $$$\mathrm{gcd}(a, b)$$$, of two positive integers $$$a$$$ and $$$b$$$ is the biggest integer that is a divisor of both $$$a$$$ and $$$b$$$.
The first line contains a single integer $$$t$$$ ($$$1 \leq t \leq 100$$$) — the number of test cases. The description of the test cases follows.
The only line of each test case contains a single integer $$$n$$$ ($$$2 \leq n \leq 10^6$$$).
For each test case, output the maximum value of $$$\mathrm{gcd}(a, b)$$$ among all $$$1 \leq a < b \leq n$$$.
In the first test case, $$$\mathrm{gcd}(1, 2) = \mathrm{gcd}(2, 3) = \mathrm{gcd}(1, 3) = 1$$$.
In the second test case, $$$2$$$ is the maximum possible value, corresponding to $$$\mathrm{gcd}(2, 4)$$$.
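The answer for each test case is $$$\lfloor n/2 \rfloor$$$: any common divisor $$$g$$$ of two distinct integers not exceeding $$$n$$$ requires both $$$g$$$ and $$$2g$$$ to be at most $$$n$$$, so $$$g \le \lfloor n/2 \rfloor$$$, and the pair $$$(\lfloor n/2 \rfloor, 2\lfloor n/2 \rfloor)$$$ attains it. A minimal Python sketch of this observation (my own illustrative solution, not the official editorial code):

import sys

def max_gcd(n: int) -> int:
    # gcd(n // 2, 2 * (n // 2)) = n // 2, and no larger value is achievable.
    return n // 2

def main() -> None:
    data = sys.stdin.read().split()
    t = int(data[0])
    print("\n".join(str(max_gcd(int(x))) for x in data[1:1 + t]))

if __name__ == "__main__":
    main()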
|
CommonCrawl
|
What Makes a Published Result Believable?
This article discusses the validity of scientific results.
June 4, 2019 | Article | Epistemology, Probability, Statistics | Lê Nguyên Hoang
OMG, it's been such a while! As I am writing these lines, I feel that I've gone back in time! Much has happened since 2015. Mostly, I've gone French and YouTube, arguably quite successfully. Also, last year, I published my first book, on Bayesianism, a topic I have become very fond of. The book is still only in French, but the English version should hopefully be coming out soon. Meanwhile, here's a post about an important application of Bayes rule which, weirdly enough, didn't make it into my book.
I believe that what we're going to discuss here is a relatively neglected aspect of science, even though it seems absolutely critical. Namely, we're going to discuss the reliability of its results. It may seem wrong to question this these days, as a lot of pseudosciences seem to be gaining in popularity. But actually, I'd argue that the rise of pseudosciences is all the more a reason why we should really do our best to better understand what makes science valid, or not. In fact, many scholars have raised huge concerns about the validity of science before me, as wonderfully summed up by this brilliant Veritasium video:
Many scientists even acknowledge that science is undergoing a replication crisis, which led to many, perhaps even most, scientifically published results being wrong. Here, we'll aim at clarifying the reasons of this replication crisis, and at underlining what ought to be done to solve it.
In fact, I first thought that I should write something very serious about it, like for a research journal or something. But this quickly frustrated me. This is because the simpler, though very approximate, explanations seemed actually much more pedagogically useful to me. In this article, I'll present a very basic toy model of publication, which relies on lots of unjustified assumptions. But arguably, it still allows us to pinpoint some of the critical points to keep in mind when judging the validity of scientifically published results.
The fundamental equation (of a toy model)
The scientific process is extremely complicated and convoluted. After all, we're dealing here with the frontier of knowledge. A lot of expertise is involved, peer review plays a big role, as well as journals' politics, trends within science and so on. Modeling it all would be a nightmare! It would basically boil down to doing all of science better than scientists themselves. Understanding how science really works is extremely hard and complicated.
This is why we've got to simplify the scientific process to have a chance to say something nontrivial about it. Even though caveats will apply. This is what I'll propose here, with a very basic toy model of publication. We consider three events in the study of a theory. First, we assume that the theory may be true, denoted $T$, or not, which we denote $\neg T$. Second, we assume that the theory is tested, typically through some statistical test. This test may yield a statistically significant signal $SS$. We shall only consider the case where statistically significant signals are signals of rejections of the theory. Third and finally, we assume that this will lead to a published result $PR$ or not.
Moreover, for simplicity, we assume that only statistically significant signals get published. Thus, only two chains of events lead to publication, namely $T \rightarrow SS \rightarrow PR$, or $\neg T \rightarrow SS \rightarrow PR$. Since published results are rejections of the theory, the validity of the published results corresponds to the probability of theory $\neg T$, given that the results have been published. In other words, the validity is measured by $\mathbb P[\neg T|PR]$. Published results are highly believable if $\mathbb P[\neg T|PR]$ is large.
To simplify computation, we shall rather study the odds of $\neg T$ given publication, i.e. the quantity $validity = \mathbb P[\neg T|PR] / \mathbb P[T|PR]$. Bayes rule, combined with our previous assumptions, then yields:
$$validity = \frac{\mathbb P[PR|\neg T]}{\mathbb P[PR|T]} \frac{\mathbb P[\neg T]}{\mathbb P[T]} = \frac{\mathbb P[PR|\neg T,SS]}{\mathbb P[PR|T,SS]} \frac{\mathbb P[SS|\neg T]}{\mathbb P[SS|T]} \frac{\mathbb P[\neg T]}{\mathbb P[T]}.$$
This equation is really the heart of this article. But it may be hard to understand if you are not familiar with statistical tests and conditional probabilities. So let's replace the weird and complicated mathematical notations by words that roughly convey their meaning.
Namely, we define $prior = \mathbb P[\neg T] / \mathbb P[T]$, $power = \mathbb P[SS|\neg T]$, $threshold = \mathbb P[SS|T]$, $clickbait = \mathbb P[PR|T,SS] / \mathbb P[PR|\neg T,SS]$. With these new quantities, we finally obtain the equation of the validity of science, according to our toy model of publication:
$$validity = \frac{power \cdot prior}{threshold \cdot clickbait}.$$
Now, the rest of this article will be a discussion of this equation. In particular, we shall discuss the validity of the terminology, the likely values of the terms (depending on fields) and what ought to be done to increase this quantity (and whether that really is a desirable goal).
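Before diving in, here is a tiny numerical illustration of the equation (the numbers are hypothetical values of my own choosing, not estimates defended in this article):

def validity(power, prior, threshold, clickbait):
    # Toy-model posterior odds of ¬T given publication.
    return (power * prior) / (threshold * clickbait)

# A well-powered test of a plausible rejection, with no publication bias:
print(validity(power=0.8, prior=1.0, threshold=0.05, clickbait=1.0))  # 16.0
# The same test of a theory that seemed very likely true (prior odds 1:10),
# whose surprising rejection is 5 times easier to publish:
print(validity(power=0.8, prior=0.1, threshold=0.05, clickbait=5.0))  # 0.32

In the first case the published rejection is probably right; in the second, it is more likely wrong.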
Classical statistics terms
Let's start with the easiest notion, which is that of $threshold$. The threshold discussed here will typically be that of the p-value statistical test. Under standard assumptions, it can easily be shown that the threshold is the probability $\mathbb P[SS|T]$ of a statistically significant signal assuming $T$ true. In many areas of science, it is chosen to be 5%. Some scientists argue that it should be brought down to 1%. This would be a great way to increase the validity of science indeed. All else being equal (which is definitely unrealistic though), this would multiply by 5 the validity of science!
A very important caveat though is that there may be reasons why $\mathbb P[SS|T]$ may sometimes get larger than the threshold of the statistical test. Namely, there may be imperfections in measurement devices that have gone unnoticed. This increases the likelihood of a statistically significant signal despite the theory being true. In fact, it may be much worse because of additional biases, as explained in this paper by Professor John Ioannidis. These biases may be typically large when financial or publishing incentives for researchers are huge and lead to distortion or misconduct. Also, such biases may result from a flawed application of statistical tools. Because of all of this, dividing the p-value threshold by 5 will probably actually not divide the value of $threshold$ by 5. Depending on fields, this quantity might for instance remain at 10%.
Another well-known term in this equation is the $power$ of the statistical test. Unfortunately, this is also a trickier notion to pinpoint, especially in the setting we define. Recall that it is equal to the probability $\mathbb P[SS|\neg T]$ of a statistically significant signal, assuming $T$ false. The problem is that $\neg T$ is often not much of a predictive theory. It will typically roughly correspond to some proposition of the form $x \neq 0$, where $x$ is a parameter of a model.
For this reason, in some studies, the negation $\neg T$ of the theory to be tested is replaced by some alternative hypothesis. Let's call it $A$. The trouble is that if $A$ is very close to $T$, typically if it assumes $x = \epsilon$, then $A$ will actually make mostly the same predictions as $T$. In particular, it will be both difficult and not very useful to reject $T$, since it would mean replacing it with a nearly identical theory.
In fact, instead of a single alternative $A$, a more Bayesian approach would usually consider an infinite number of alternatives $A$ to theory $T$. In such cases, it is arguably quite clear that $T$ is not that likely, since it is in competition with infinitely many other alternatives! Moreover, to structure the competition fairly, instead of assuming that all alternatives are on an equal footing, Bayesianism will usually rather add a prior distribution over all alternatives, and will rather puzzle over how this prior distribution is changed by the unveiling of data. Evidently, much more needs to be said on this fascinating topic. Don't worry. The English version of my book on this topic is forthcoming…
Now, it might be relevant to choose $A$ to be the most credible alternative to $T$, or something like this. This is what has been proposed by Professor Valen Johnson. Using an alternative theory somewhat based on this principle, and numerous disputable simplifying assumptions, Johnson estimated that, for a p-value threshold at 5% and p-values of the order of 1 to 5%, the ratio $power/threshold$ was somewhere between 4 and 6 for well-designed statistical tests. For a p-value threshold at 1%, this quantity would be around 14.
However, given our previous caveats about the value of $threshold$, it seems that we should rather regard the ratio $power/threshold$ to be around, say 3, for a 5% threshold, and around, say 7 for 1% threshold. These quantities are extremely approximate estimates. And of course, they should not be taken as is.
The prior
We can now move on to the most hated term of classical statistics, namely the concept of $prior$. Note that a Bayesian prior is usually defined as $\mathbb P[\neg T]$, not as the prior odds $prior = \mathbb P[\neg T]/\mathbb P[T]$. Please excuse this abuse of terminology, whose purpose is to simplify as much as possible the understanding of the fundamental equation of this article. For the purpose of this article (and this article only!), $prior$ is thus a real-valued number between 0 and $\infty$.
But what does it represent? Well, as the equation $prior = \mathbb P[\neg T]/\mathbb P[T]$ says it, $prior$ quantifies how much more likely $\neg T$ seemed compared to $T$ before collecting any data. The larger $prior$ is, the more $\neg T$ seemed to be believable prior to the study.
Now, if you're not a Bayesian, it may be tempting to argue that there's no such thing as a prior. Or that science should be performed without prejudice. There may be a problem with the connotation of words here. In fact, prior is a synonym (with opposite connotation) of "current state of knowledge". And it seems irrational to analyze data without taking into account our "current state of knowledge". In fact, many statisticians, for instance here, argue that the current problem of statistical tests is that they don't sufficiently rely on our current understanding of the world. Statistical analyses, they argue, must be contextualized.
To give a clear example where prior-less reasoning seems to fail, let's look at parapsychology which may test something like precognition. The reason why parapsychology papers do not seem to yield reliable results usually has nothing to do with the method they apply. Indeed, they usually apply the supposedly "gold standard" of science, namely double-blind randomized controlled trials with a p-value statistical test. You cannot blame them for their "scientific method".
The reason why, despite the rigor of their method, parapsychologists nevertheless obtain results that do not seem trustworthy is because of the prior. In fact, parapsychologists mostly test theories $T$ that are very likely true. In other words, they choose to test theories whose rejection odds $prior$ are near-zero. Yet, if the $prior$ is near-zero, according to our fundamental equation, then the validity of the published result will be near-zero too.
It is worth putting this in perspective, including with respect to the value of $threshold$. In particle physics, this $threshold$ is extremely low. It is at around $10^{-7}$ (and, if there's no experimental error, this would correspond to $power/threshold \approx 10^5$). This sounds like it should be sufficient. Well, not necessarily. Given that Bayes rule is multiplicative, if a theory is really wrong, it will become exponentially unbelievable with respect to the amount of collected data. In other words, by collecting, say, thousands of data points, one could get to a credence of $10^{-100}$ for a given very wrong theory. Put differently, the scales of the term $prior$ may be very different from those of $threshold$ or $power$. This is why $prior$ arguably plays a much bigger role in judging the validity of scientifically published results than $threshold$.
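To see how quickly this multiplicative updating snowballs, here is a small sketch (the per-observation Bayes factor is an arbitrary illustrative value, not one estimated in this article):

def posterior_odds(prior_odds, bayes_factor_per_point, n_points):
    # Each independent observation multiplies the odds of ¬T by the same Bayes factor.
    return prior_odds * bayes_factor_per_point ** n_points

# Even a mild factor of 0.8 against ¬T per data point, repeated 1000 times,
# drives the odds from 1:1 down to roughly 10^-97.
print(posterior_odds(prior_odds=1.0, bayes_factor_per_point=0.8, n_points=1000))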
Adding to this the very nonnegligible probability of experimental error, in which case $power/threshold$ may actually be of the order of 100, this explains why the discovery of faster-than-light neutrinos was promptly rejected by physicists.
In particular, in such a case, the quantity $power \cdot prior / threshold$ may actually be smaller than 1. In fact, in cases where a threshold of 1% is used, it suffices that the prior odds of $\neg T$ were 1 to 10, which arguably may not be absurd in, say, clinical tests of drugs, for $power \cdot prior / threshold$ to be much smaller than 1. This would mean that despite the publication arguing for $\neg T$, the theory $T$ is still more credible than $\neg T$. Or put differently, the publication is more likely to be wrong.
So, you may ask, why on earth would we try to reject theories $T$ with large priors? Isn't it the root cause of most publications being false? Yes it is. But this doesn't seem to be a bug of science. It actually seems to be a feature of science. Typically, scientists are often said to have to test theories over and over, even though such theories have been supported so far by a lot of evidence. This means that they will have to test likely theories, which means that the $prior$ of rejection is very small.
There is a more practical reason why scientists may want to reject theories that seem very likely prior to analysis: these are the results that are often the most prestigious. Such results, if they hold, will be much more cited and celebrated. And of course, this gets even worse when scientists have motives, such as clearing the use of some drug. This can be heavily aggravated by p-hacking, which can be regarded as the scaling or automation of the analysis of true theories. This scaling or automation may hugely increase the number of true theories to be tested, with the guarantee that statistically significant results will eventually be found. And this leads to the final problematic piece of the puzzle: clickbait.
So let's discuss the $clickbait$ term, which may also be called the publication bias. Recall that $clickbait = \mathbb P[PR|T,SS]/\mathbb P[PR|\neg T,SS]$. In other words, it is about how much more likely it is to publish the rejection of a true theory $T$, as opposed to that of a false theory $T$, given the same amount of statistical significance.
It may seem that the $clickbait$ term should equal 1. After all, from far away, it may seem that the publication peer-review process is mostly about judging the validity of the analysis. Indeed, it is often said that peer review mostly aims at validating results. But that seems very far from how peer review processes actually work. At least from my experience. And from the experience of essentially all scientists I know.
In fact, I'd argue that much of peer review has nearly nothing to do with validating results. After all, quite often, reviewers do not bother to ask for the raw data, nor will they redo the computations. Instead, I'd argue that peer review is mostly about judging whether a given paper is worthy of publication in this or that journal. Peer review is about grading adequately the importance or the value of such or such findings.
This is why, at peer review, not all theories are treated equally, even when they have the same statistical significance. In particular, the rejection of a theory that really seemed true is much more clickbaity than the rejection of a theory that did not seem true. This is why $clickbait$ is probably larger than 1, if not much, much, much larger than 1.
Note that this will be particularly the case for journals that receive a large number of submissions. Indeed, the more submissions there are, the more room there is for publication bias. This arguably partly explains why Nature and Science are particularly prone to the replication crisis and to paper retractions.
The focus on p-values seems to be aggravating the clickbait problem. Indeed, by removing other considerations that could have favored the rejection of false theories rather than that of true theories, p-values may be serving as an excuse to validate any paper with a statistically significant signal, and to then move the discussion towards the clickbaitness of the papers whose p-value was publishable.
Unfortunately, it seems that the scales of clickbaitness may be at least comparable to that of the p-value threshold. Evidently, this probably strongly depends on the domain and the journal that we consider. Combining it all, given that I already essentially argued that $power \cdot prior / threshold$ might already be smaller than 1, we should expect $validity = (power \cdot prior) / (threshold \cdot clickbait)$ to actually be much smaller than 1. This is why, arguably, especially if the p-value is used as a central threshold for publication, we really should expect the overwhelming majority of scientifically published results to be wrong. This is not flattering news for science.
Should published results be more valid?
It may seem that I am criticizing scientists for preferring the test of very likely theories, and journals for favoring clickbait papers. But this is not nearly the point of this article. What does seem to be suggested by our toy model of publication though, is that these phenomena undermine the validity of scientifically published results. It seems unfortunate to me that these important features of science are not sufficiently underlined by the scientific community.
Having said this, I would actually argue that science should not aim at the validity of scientifically published results. In fact, I believe that it is a great thing that scientists still investigate theories that really seem to hold, and that journals favor the publications of really surprising results. Indeed, science research is arguably not about establishing truths. It seems rather to be about advancing the frontier of knowledge. This frontier is full of uncertainty. And it is very much the case that, in the shroud of mystery, steps that seem forward may eventually turn out to be backwards.
In particular, from an information theoretic point of view, what we should aim at is not the validity of what we say, but, rather, some high-information content. Importantly, high-information does not mean true. Rather, it means information that is likely to greatly change what we thought we knew.
This discussion makes more sense in a Bayesian framework. High-information contents would typically be those that justifiably greatly modify our prior beliefs, when Bayes rule is applied. Typically, a study that shows that we greatly neglect the risk of nuclear war may be worth putting forward, not because the nuclear war is argued to be very likely by the study, but perhaps because it argues that whoever gave a $10^{-15}$ probability of nuclear war in the 21st century would have to update this belief, given the study, to, say $10^{-3}$. Such an individual would still find the study "unreliable". Yet, the study would still have been of high-information, since it would have upset the probability by 12 orders of magnitude!
Still more precisely, and still in a Bayesian framework, the question of which research ought to be undertaken, published and publicized would correspond to information that becomes useful to update strategies to achieve some goal, say, improving the state of the world. In other words, in this framework, roughly speaking, a published research result $PR$ is worth publishing if
$$\max_{action} \mathbb E[world|action,PR] \gg \max_{action} \mathbb E[world|action].$$
The more general approach to goal-directed research publication (and, more generally, to what ought to be communicated) is better captured by the still more general framework of reinforcement learning, and especially of the AIXI framework. Arfff… There's so much more to say!
On another note, it seems unfortunate to me that there is not a clearer line between scientific investigations and scientific consensus. In particular, both seem to deserve to be communicated, which may be done through publications. But it may be worth distinguishing interesting findings from what scientists actually believe about such or such topic. And importantly, scientific investigations perhaps shouldn't aim at truths. They perhaps should aim at highlighting results that greatly upset readers' beliefs — even if they don't actually convince them.
Wow that was long! I didn't think I would reach the usual length of Science4All articles. It does feel good to be back, though I will probably not really be back. You probably shouldn't expect another new article in the next 3 years!
Just to sum up, the validity of a scientifically published result does depend on a few classical statistics notions, like the threshold of p-value tests and statistical power. And arguably, yes, it'd probably be better to lower the p-value threshold, even to, say, 0.1% as proposed by Valen Johnson. However, this does not seem sufficient at all. It seems also important to be watchful of the publication bias, also known as the clickbaitness of the paper. Perhaps most importantly, one should pay attention to the prior credence in theory to be tested. Unfortunately, this is hard to do, as it usually requires a lot of expertise. Also, it's very controversial because this would make science subjective, which is something that some scientists abhor.
On this note, I'd like to quote one of my favorite scientists of all times, the great Ray Solomonoff: "Subjectivity in science has usually been regarded as Evil — that it is something that does not occur in 'true science' — that if it does occur, the results are not 'science' at all. The great statistician, R. A. Fisher, was of this opinion. He wanted to make statistics 'a true science' free of the subjectivity that had been so much a part of its history. I feel that Fisher was seriously wrong in this matter, and that his work in this area has profoundly damaged the understanding of statistics in the scientific community — damage from which it is recovering all too slowly."
Unfortunately, overall, it is far from clear how much we should trust a scientifically published result. This is very context-dependent. It depends on the theory, the area of research, the peer review process, the policy of the journal and the current state of knowledge. Science is complex. Nevertheless, the lack of validity of scientifically published results need not undermine the validity of the scientific consensus. In fact, in my book, I argue through a Bayesian argument that, in many contexts, the scientific consensus should be given a much greater credence than any scientific publication, and than any scientist. Indeed, especially if this consensus is fed with a lot of data that are likely to change the scientists' priors, it can be argued that the opinion of the scientific community gets updated in a similar manner as the way Bayes rule requires priors to be updated (this is also related to things like multiplicative weights update or the Lotka-Volterra equations).
But more on that in a book whose English version is forthcoming…
|
CommonCrawl
|
publisher correction
Publisher Correction | Published: 13 June 2019
Publisher Correction: A primary radiation standard based on quantum nonlinear optics
Samuel Lemieux (ORCID: 0000-0002-1773-8795)1,
Enno Giese (ORCID: 0000-0002-1126-6352)1,6,
Robert Fickler1,7,
Maria V. Chekhova2,3,4 &
Robert W. Boyd1,5
Nature Physics (2019)
Optical metrology
Quantum optics
The original article was published on 04 March 2019
Correction to: Nature Physics https://doi.org/10.1038/s41567-019-0447-2, published online 4 March 2019.
In the version of this Letter originally published, in equation (3) a superscript '2' was mistakenly placed inside the square root sign; instead it should have been outside as shown below:
$${\cal{N}} = \left( c^{-1}\, L\, \chi^{(2)} E_{\mathrm{p}} \right)^2 \left( \sqrt{\omega\, \omega_{\mathrm{i}} / \left( n\, n_{\mathrm{i}} \right)} \right)^{2} \mathrm{sinc}^2\left( \Delta\kappa L / 2 \right)$$
This error has now been corrected in the online versions.
Enno Giese
Present address: Institut für Quantenphysik and Center for Integrated Quantum Science and Technology, Universität Ulm, Ulm, Germany
Robert Fickler
Present address: Photonics Laboratory, Physics Unit, Tampere University, Tampere, Finland
Department of Physics, University of Ottawa, Ottawa, Ontario, Canada
Samuel Lemieux, Enno Giese, Robert Fickler & Robert W. Boyd
Max Planck Institute for the Science of Light, Erlangen, Germany
Maria V. Chekhova
Physics Department, Lomonosov Moscow State University, Moscow, Russia
University of Erlangen-Nuremberg, Erlangen, Germany
Institute of Optics, University of Rochester, Rochester, NY, USA
Robert W. Boyd
Correspondence to Samuel Lemieux.
|
CommonCrawl
|
Genome-wide association mapping of iron homeostasis in the maize association population
Andreas Benke1,
Claude Urbany1 &
Benjamin Stich1
Iron (Fe) deficiency in plants is the result of low Fe soil availability affecting 30% of cultivated soils worldwide. To improve our understanding on Fe-efficiency this study aimed to (i) evaluate the influence of two different Fe regimes on morphological and physiological trait formation, (ii) identify polymorphisms statistically associated with morphological and physiological traits, and (iii) dissect the correlation between morphological and physiological traits using an association mapping population.
The fine-mapping analyses on quantitative trait loci (QTL) confidence intervals of the intermated B73 × Mo17 (IBM) population provided a total of 13 and 2 single nucleotide polymorphisms (SNPs) under limited and adequate Fe regimes, respectively, which were significantly (FDR = 0.05) associated with cytochrome P450 94A1, invertase beta-fructofuranosidase insoluble isoenzyme 6, and a low-temperature-induced 65 kDa protein. The genome-wide association (GWA) analyses under limited and adequate Fe regimes provided in total 18 and 17 significant SNPs, respectively.
Significantly associated SNPs on a genome-wide level under both Fe regimes for the traits leaf necrosis (NEC), root weight (RW), shoot dry weight (SDW), water (H₂O), and SPAD value of leaf 3 (SP3) were located in genes or recognition sites of transcriptional regulators, which indicates a direct impact on the phenotype. SNPs which were significantly associated on a genome-wide level under both Fe regimes with the traits NEC, RW, SDW, H₂O, and SP3 might be attractive targets for marker assisted selection as well as interesting objects for future functional analyses.
Iron (Fe) deficiency in plants is the result of a low Fe availability which might be induced by lime-chlorosis that affects 30% of cultivated soils worldwide [1]. As an adaptation to the sparingly available Fe, plants evolved two different strategies to mobilize and uptake Fe [2]. Dicotyledonous and non graminaceous plant species acquire Fe by the so-called strategy I mechanism [3]. The characteristic of this strategy is the release of protons into the rhizosphere that facilitate the mobilization and subsequent reduction of Fe(III) to Fe(II) via a plasma membrane bound Fe(III) chelate reductase [4]. The soluble Fe(II) is finally taken up by the iron regulated transporter 1 (IRT1) [5].
For crop plants which are graminaceous species, such as barley, rice, and maize, Fe is acquired using the so-called strategy II [6]. Characteristic for this strategy is the release of non-proteinogenic compounds named phytosiderophores. These compounds chelate the Fe(III) in the rhizosphere. Phytosiderophore-Fe(III) complexes are transported by the specific transporter yellow stripe 1 (YS1) into the plant [7]. It was shown by [2] that the amount of exuded phytosiderophores is crucial for a chlorosis-tolerant and therefore Fe-efficient plant. However, for an Fe-efficient genotype, the balance of Fe-dependent systems like Fe mobilization and uptake into the plant and the homeostasis-related mechanisms like translocation and regulation of the Fe level in the cell to avoid shortage or toxicity [8,9] is essential.
To improve our understanding of the mechanisms which are responsible for Fe-efficiency in maize, two different methods have been applied so far. The RNA-Sequencing approach used by [10] focused on genes which were differentially expressed between the Fe-efficient and inefficient inbred lines under sufficient and deficient Fe regimes. This study provided a tremendous amount of putative candidate genes for Fe-efficiency. The same inbred lines were used for the establishment of the intermated B73 × Mo17 (IBM) segregating population [11]. Benke et al., 2014 [12] observed a considerable phenotypic variation for Fe-efficiency in this population which was used to map quantitative trait loci (QTL). An alternative to linkage mapping is association mapping which has the potential to provide a higher mapping resolution as well as allows the evaluation of a higher number of alleles at a time. To our knowledge, no genome-wide association study has been conducted to dissect Fe-efficiency in maize.
The objectives of our study were to (i) evaluate the influence of different Fe regimes on morphological and physiological trait formation, (ii) identify polymorphisms statistically associated with morphological and physiological traits, and (iii) dissect the correlation between morphological and physiological traits using an association mapping population.
The repeatability (H²) of the examined traits ranged for the whole set of phenotyped inbred lines from 0.53 (H₂O) to 0.72 (SP3, SP4, and RL) under the Fe-deficient regime (Table 1). H² of the traits evaluated under the Fe-sufficient regime varied between 0.47 (H₂O) and 0.87 (SP4).
Table 1 Traits recorded in the current study for the deficient and sufficient iron (Fe) regimes, where H² is the repeatability on an entry-mean basis for the association mapping population
The adjusted entry means (AEM) were calculated for all physiological and morphological traits under consideration of the block effects for each Fe regime (Figure 1). No variation was observed for BTR under the Fe-sufficient regime. For NEC, no significant (α = 0.05) difference between both Fe regimes was found. The remaining morphological and physiological traits except H₂O showed significantly (α = 0.05) lower trait values under the Fe-deficient regime in comparison to the Fe-sufficient regime. For H₂O the opposite trend was observed.
Boxplot of the adjusted entry means for the association mapping population of 267 maize inbred lines evaluated at Fe-deficient and Fe-sufficient regimes, represented in white and gray, respectively. A t-test was applied to examine the difference of a trait between both Fe conditions. *, **, ***: significant at P = 0.05, 0.01, and 0.001, respectively; ns, not significant.
The lowest pairwise correlation coefficient under the Fe-deficient regime was observed between H₂O and LAT (r = 0.17) (Figure 2). By comparison, for the Fe-sufficient regime, the highest positive correlation coefficient was observed between SDW/SL and SDW (r = 0.96) and the lowest between RL and RW (r = 0.23).
Pairwise correlation coefficients calculated between all pairs of traits collected for the association mapping population. The values above the diagonal represent the correlation coefficients between the adjusted entry means (AEM) of the Fe-deficient regime. The values below the diagonal represent the correlation coefficients between the AEM of the Fe-sufficient regime.
In the ASMP, the population structure explained on average 2.02% of the phenotypic variation with a minimum of 0.08% (SL) and a maximum of 5.32% (RL) under the Fe-deficient regime (Additional file 1: Table S1). Under the Fe-sufficient regime, the population structure accounted on average for 2.42% of the phenotypic variation ranging from 0.35% (SDW) to 5.09% (RL).
The QTL fine-mapping (FM) analyses resulted in total in 13 significant (FDR = 0.05) SNPs detected in QTL confidence intervals of the IBM population, of which NEC QTL1 comprised the highest number (4) under the Fe-deficient regime (Table 2, Figure 3). The highest proportion of phenotypic variance was explained by a SNP in QTL3 of RW (8.47%). The maximum proportion of phenotypic variance explained in a simultaneous fit by all SNPs in a QTL confidence interval was 11.45% (QTL8 SP3) and the minimum was 0.39% (QTL4 RW).
Summary of significant (FDR = 0.05) single nucleotide polymorphisms (SNPs) detected in confidence intervals of quantitative trait loci (QTL) (red) of [12] and genome-wide SNP association analyses (blue) using the association mapping population with respect to the iron (Fe) regimes of 10 μM and 300 μM.
Table 2 Single nucleotide polymorphism (SNP) markers significantly (FDR = 0.05) associated in the association mapping population which were located within confidence intervals of QTL detected for the same trait in the IBM population [12]
Under the Fe-sufficient regime, the QTL FM analyses revealed in total two significant (FDR = 0.05) SNPs for SP4 QTL1 (Table 2, Figure 3). The maximum proportion of phenotypic variance of SNPs was 6.32%. The phenotypic proportion was 10.31% for both SNPs in a simultaneous fit.
The genome-wide association (GWA) analyses of the traits examined in the Fe-deficient regime provided in total 18 significant (FDR = 0.05) SNPs, of which NEC accounted for the highest number (12) (Table 3, Figure 3, Additional file 2: Figure S1;A, Additional file 3: Figure S3;A). The proportion of phenotypic variance explained by a single SNP was highest for RL (18.81%). The proportion of phenotypic variance explained in a simultaneous fit by all SNPs for one trait was maximal for RW (34.65%) and minimal for SDW (13.01%).
Table 3 Single nucleotide polymorphism (SNP) markers significantly (FDR = 0.05) associated with traits evaluated under Fe-deficient and the Fe-sufficient iron regime
The GWA analyses under the Fe-sufficient regime revealed in total 17 significant (FDR = 0.05) SNPs, of which H₂O accounted for the highest number (9) (Table 3, Figure 3, Additional file 4: Figure S2;A, Additional file 5: Figure S4;A). The proportion of the explained phenotypic variance was highest for H₂O (21.21%). In a simultaneous fit of all significant (FDR = 0.05) SNPs, the maximum proportion of phenotypic variance explained was 57.47% (H₂O) and the minimum was 10.99% (SP3).
Under consideration of the global extent of LD, 18 and 9 unique genes were linked to the significantly (FDR = 0.05) associated SNPs under the Fe-deficient and Fe-sufficient regime, respectively (Tables 2 and 3). None of the Sanger-sequenced genes evaluated in Additional file 2: Figure S1 included SNPs that were significantly (FDR = 0.05) associated with the morphological and physiological traits.
Environmental factors such as pH variation in the soil, temperature, water stress, and mineral concentration effects have a strong influence on Fe availability for plants [2]. To reveal genotypic effects that contribute to Fe-efficiency and avoid an overlap with other mineral nutrients, hydroponic culture has been proven to be the method of choice providing standard environmental conditions [13]. Such a culture has been used in our study to examine the Fe-efficiency in a broad germplasm set of maize.
Dissection of phenotypic diversity and relation between the examined traits
We observed for all traits moderate to high repeatabilities under both Fe regimes (Table 1). This finding indicated that the genetic contribution to variation was minimally covered by experimental variation of hydroponics which in turn increases the power of the genetic dissection of Fe-efficiency by association mapping methods.
We observed, under the Fe-deficient regime, variation for the trait BTR (Figure 1). Long et al. 2010 [14] revealed an Fe sensing gene named POPEYE in Arabidopsis roots during Fe-deficiency. Their finding indicated that Fe deficiency sensing mechanisms regulate terminal root branching. However, in contrast to Arabidopsis [14], in maize the mechanism of root branching under Fe-deficiency is not yet understood.
The whole set of traits evaluated in one Fe regime showed mostly moderate to high pairwise correlations (Figure 2). This finding suggests that for each of the Fe-sufficient and Fe-deficient regimes most of the examined traits have a joint regulation. One of the few exceptions was the correlation between leaf necrosis and water content, which was only observed in the Fe-sufficient regime. This positive correlation might be caused by a nutrient distortion, also known as concentration effect [2].
Marker-phenotype associations for QTL confidence intervals and on genome-wide scale
Using the ASMP we were able to validate 13% and 3% of the detected QTLs from our former study [12] for the Fe-deficient and Fe-sufficient regimes, respectively. Among the SNPs that were located within QTL confidence intervals [12], we identified a SNP (S1_28765627) in the cytochrome P450 94A1 (CYP94A1) (GRMZM2G036257) gene that was significantly associated with NEC (Table 2). CYP94A1 is responsible for modifying lipophilic compounds like fatty acids [15]. Its involvement in plant development, repair, and defense [15] might indicate the contribution of stress response mechanisms during Fe-deficiency. Furthermore, cytochrome P450 family proteins might also play a role in Fe sensing [16] as Fe is incorporated into a heme group of the cytochrome P450 proteins [17].
We observed under the Fe-deficient regime several genes to be associated with NEC (Figure 4) and RW that are mechanistically involved in regulation of stress response (Table 3). A subset of these genes includes the invertase beta-fructofuranosidase insoluble isoenzyme 6 (NEC, GRMZM2G018692) [18], low-temperature-induced 65 kDa protein (NEC, GRMZM2G376743) [19], and the late embryogenesis abundant protein 4-5 (SP3, GRMZM2G177084) [20]. O'Rourke et al., 2007 [21] showed that these genes are responsible for the universal stress response caused by Fe-deficiency, although they do not bind or incorporate Fe in their protein structure. This suggested that these genes are important to maintain the viability of the plant due to stress prevention caused by Fe-deficiency. Furthermore, significant associations for NEC might indicate that this trait is genetically less complex than Fe-chlorosis, as for the SPAD value related traits no significant association could be detected under the Fe-deficient regime.
Genome-wide P values for association analysis of NEC under the Fe-deficient regime using 267 maize inbred lines of the association mapping population. The horizontal line corresponds to a nominal significance threshold of 5% considering the Benjamini Hochberg correction for multiple testing.
We did not observe a clear clustering of genotypes with high NEC values in the individual subgroups. Furthermore, when examining the subgroups individually (Additional file 1: Table S1), we detected no significant associations, neither for NEC nor for RW, under both Fe regimes (data not shown). Additionally, excluding genotypes with a higher NEC susceptibility from the association analysis changed the results only marginally compared to the analyses with all genotypes. These results suggested that the concentration effect does not influence the conclusions of our study.
Despite the variation observed for BTR under the Fe-deficient regime, no significant associations have been detected. Therefore, further research is required on the genetics of BTR. In that context, the genes identified in our companion study [10] using an RNA sequencing approach can be promising starting points.
In our study, genes known to be mechanistically involved in strategy II-related processes for Fe mobilization, uptake, and storage were resequenced (Additional file 6: Table S2). For polymorphisms in these genes, no significant associations were detected for either Fe regime. This finding could be explained by a correlation of allele frequency of the mechanistically involved genes and population structure, as was observed previously for flowering time and Dwarf8 [22,23]. As we did not observe a strong correlation between population structure and phenotypic variation of the studied traits, this explanation is not likely to be true (Additional file 1: Table S1). The reason could be that these mechanistically involved genes have been identified by mutant screening only and that natural genetic variation at these genes leads to evolutionary disadvantages. Therefore, only neutral polymorphisms with respect to the phenotype are observed in the maize ASMP. This might reflect purifying selection of these adaptive genes, such that they do not contribute to phenotypic variation of the quantitative traits [24].
An overlap between SNPs associated with different traits was not observed, putatively due to minor-effect associations and the stringent significance thresholds applied in our study. Nevertheless, significant associations of SNPs and their corresponding genes as described above provide an insight into the genetic architecture of biological processes characteristic for each trait that is in a direct relation to Fe-homeostasis. However, as association mapping analyses provide only indirect statistical evidence for a contribution of the considered allele to phenotypic variation [25], a direct functional validation is indispensable. Furthermore, additional traits like protein and transcriptome expression profiling could be performed on the association mapping population to further dissect Fe-homeostasis.
The QTL confidence intervals of the traits NEC, RW, SDW/SL, SP3, SP4, and SP6 from a previous study contained hundreds of genes and millions of base pairs. A dissection of these QTL confidence intervals using association mapping methods allowed a confirmation of the previously detected QTLs as well as the fine-mapping. In addition, our study described SNPs which were significantly associated on a genome-wide level under both Fe regimes with the traits NEC, RW, SDW, H₂O, and SP3. Several of these SNPs were located in genes (coding) or recognition sites (non-coding) of transcriptional regulators, which indicates a direct impact on the phenotype. Beside being attractive targets for marker assisted selection, these loci are interesting objects for future functional analyses.
Plant material
A set of 302 maize inbred lines representing world-wide maize diversity [26] was used as association mapping population (ASMP) in the current study. Due to the unavailability of sufficient amounts of seeds for 35 inbred lines, a final set of 267 inbred lines was evaluated in the frame of this study (Additional file 7: Table S4).
Culture conditions and evaluated traits
Maize seeds were sterilized with 60°C hot water for 30 minutes. Afterwards, seeds were placed between two filter paper sheets moistened with saturated CaSO₄ solution for germination in the dark at room temperature. After 6 days, the germinated seeds were transplanted to a continuously aerated nutrient solution with nutrient concentrations as described by [27]. The plants were supplied with 100 μM Fe(III)-EDTA for 7 days. From day 14 to 28, plants were cultured at 10 (Fe-deficient) and 300 (Fe-sufficient) μM iron regimes. The nutrient solution was exchanged every third day. Plants were cultivated from day 7 to day 28 in a growth chamber at a relative humidity of 60%, light intensity of 170 μmol m⁻² s⁻¹ in the leaf canopy, and a day-night temperature regime of 16 h/24°C and 8 h/22°C, respectively.
Each genotype was grown in one shaded pot of 600 milliliter volume. All pots of one Fe regime were arranged in an alpha lattice design with 13 incomplete blocks. The entire experiment was replicated b = 3 times for the Fe-deficient and the Fe-sufficient regime, respectively.
Under both Fe regimes, the following traits were evaluated: the relative chlorophyll content of the 3rd, 4th, 5th, and 6th leaf (SP) measured with a SPAD meter (Minolta SPAD 502). Branching at the terminal 5 cm of the root (BTR) was evaluated with 1 for strong presence and 9 for absence of terminal root branching. Leaf necrosis (NEC) was recorded as a visual score on a scale from 1 for high trait expression to 9 for low trait expression. The lateral root formation (LAT) was recorded on a scale from 1 for absence to 9 for high trait expression. Furthermore, root length (RL), root weight (RW), shoot length (SL), shoot dry weight (SDW), and water content (H₂O), as well as the ratio between SDW and SL (SDW/SL), were recorded according to [12].
In our study, the data collected in this way for both Fe regimes were not directly combined to calculate a response variable for each trait in order to avoid problems related to error propagation. Instead, we followed examples from the literature and analysed data from the regimes individually but compared the results afterwards.
SNP marker data
A data set with 437,650 SNP markers for the ASMP is publicly available from http://www.panzea.org. If, for one SNP, more than 20% of the marker information across all inbreds was unknown or denoted as missing data, this SNP was excluded from the following analyses. Furthermore, SNPs with a minor allele frequency lower than 2.5% were excluded from the following analyses.
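A minimal sketch of this filtering step is given below (hypothetical genotype coding and function names chosen for illustration; this is not the authors' pipeline):

import numpy as np

def filter_snps(geno, max_missing=0.20, min_maf=0.025):
    # geno: (n_lines, n_snps) array coded 0/1/2 with np.nan for missing calls.
    missing_rate = np.isnan(geno).mean(axis=0)
    alt_freq = np.nanmean(geno, axis=0) / 2.0      # frequency of the alternative allele
    maf = np.minimum(alt_freq, 1.0 - alt_freq)     # minor allele frequency
    keep = (missing_rate <= max_missing) & (maf >= min_maf)
    return geno[:, keep], keep

rng = np.random.default_rng(0)
geno = rng.choice([0.0, 1.0, 2.0, np.nan], size=(267, 1000), p=[0.45, 0.10, 0.40, 0.05])
filtered, mask = filter_snps(geno)
print(filtered.shape, int(mask.sum()))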
A set of 16 candidate genes for mobilization, uptake, storage, and transport of Fe as well as regulatory function on these processes was selected for sequence analyses to detect additional polymorphisms compared to the above mentioned SNP data set (Additional file 2: Figure S1). Primers for candidate genes were designed using software Primer3 [28] (Additional file 8: Table S3). Each region of the candidate gene sequence was PCR amplified for the ASMP. PCR products were sequenced by the DNA core facility of the Max-Planck-Institute for Plant Breeding Research on Applied Biosystems (Weiterstadt, Germany) Abi 3730XL sequencers using BigDye-terminator v3.1 chemistry. Premixed reagents were from Applied Biosystems. The gene sequences were aligned with the software ClustalW2 (http://download.famouswhy.com/clustalw2/) and edited with BioLign (http://en.bio-soft.net/dna/BioLign.html) manually. The SNPs were filtered as described above and the remaining 562 SNPs were added to the above mentioned set of genome-wide distributed SNPs.
Phenotypic data analyses: The traits collected at each Fe regime were analyzed using the following mixed model:
$$y_{ikm} = \mu + g_{i} + r_{k} + b_{km} + e_{ikm},$$
where $y_{ikm}$ is the observation of the $i$th genotype in the $k$th replication and the $m$th incomplete block, $\mu$ the general mean, $g_{i}$ the effect of the $i$th genotype, $r_{k}$ the effect of the $k$th replication, $b_{km}$ the effect of the $m$th incomplete block in the $k$th replication, and $e_{ikm}$ the residual error. To estimate adjusted entry means (AEM) for all inbreds at each of the two Fe regimes, we considered $g$ as fixed and $r$ and $b$ as random. Furthermore, we considered $g$, $r$, and $b$ as random to estimate the genotypic ($\sigma^{2}_{g}$) and error ($\sigma^{2}_{e}$) variances.
The repeatability \(H^{2}\) for each Fe regime was calculated as:
$$H^{2}= \frac{{\sigma^{2}_{g}}}{{\sigma^{2}_{g}} + \frac{{\sigma^{2}_{e}}}{b}}{.} $$
The residuals for each trait under both Fe regimes were tested with a Kolmogorov-Smirnov test [29] for their normal distribution. Pairwise correlation coefficients were assessed between all pairs of traits for the ASMP. Student's t-tests were calculated for each trait to examine the significance of the difference between the Fe-deficient and sufficient regimes.
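As a minimal illustration of these calculations, the following Python sketch computes the repeatability from variance components and runs the normality and paired t-tests with SciPy; all numeric values and arrays are hypothetical placeholders, not data from the study.

import numpy as np
from scipy import stats

# Repeatability from variance components (hypothetical values, b = 3 replications)
sigma2_g, sigma2_e, b = 1.8, 0.9, 3
H2 = sigma2_g / (sigma2_g + sigma2_e / b)
print(f"H^2 = {H2:.2f}")

# Normality of the residuals for one trait (placeholder residuals)
rng = np.random.default_rng(0)
residuals = rng.normal(size=267)
z = (residuals - residuals.mean()) / residuals.std(ddof=1)
ks_stat, ks_p = stats.kstest(z, 'norm')

# Difference between Fe regimes for one trait, paired over the 267 inbreds (placeholders)
aem_deficient = rng.normal(10.0, 2.0, size=267)
aem_sufficient = aem_deficient + rng.normal(0.5, 1.0, size=267)
t_stat, t_p = stats.ttest_rel(aem_deficient, aem_sufficient)
print(ks_p, t_p)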
Association analyses
The AEM of each trait for each Fe regime were used to test their associations with each of the 287,390 SNP markers using the following mixed model:
$$M_{ip}= \mu + m_{p} + g^{*}_{i} + \sum^{z}_{u=1} Q_{iu}v_{u} + e_{ip}{,} $$
where \(M_{ip}\) is the AEM of the ith maize inbred line carrying the pth allele, \(m_{p}\) the effect of allele p, \(g^{*}_{i}\) the residual genetic effect of the ith inbred line, \(v_{u}\) the effect of the uth column of the population structure matrix Q [26], and \(e_{ip}\) the residual [30]. The variance–covariance matrix of the vector of random effects \(g^{*} = (g^{*}_{1}, \ldots, g^{*}_{267})\) was assumed to be Var(\(g^{*}\)) = 2K\(\sigma ^{2}_{g^{*}}\), where K was a 267 × 267 matrix of kinship coefficients among the ASMP [31], and \(\sigma ^{2}_{g^{*}}\) the genetic variance estimated by REML. The relation between the population structure and the morphological and physiological traits was estimated using the 'EMMA' R package [31].
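Once the kinship matrix and variance components are available, the single-marker test can be sketched as a generalized least-squares fit. The following numpy sketch is a schematic re-implementation of the model above, not the EMMA code used in the study; all inputs are assumed to be prepared elsewhere.

import numpy as np
from scipy import stats

def snp_association(y, snp, Q, K, sg2, se2):
    # y: AEM vector (n,); snp: allele coding (n,); Q: structure covariates (n, z)
    # K: kinship (n, n); sg2, se2: variance components estimated beforehand by REML
    n = y.size
    V = 2.0 * sg2 * K + se2 * np.eye(n)        # Var(y) = 2K*sigma_g*^2 + sigma_e^2*I
    X = np.column_stack([np.ones(n), snp, Q])  # intercept, marker effect, structure columns
    Vinv = np.linalg.inv(V)
    XtVinv = X.T @ Vinv
    cov_beta = np.linalg.inv(XtVinv @ X)
    beta = cov_beta @ (XtVinv @ y)             # GLS estimates of the fixed effects
    wald = beta[1] ** 2 / cov_beta[1, 1]       # Wald statistic for the marker effect
    return beta[1], stats.chi2.sf(wald, df=1)  # effect estimate and p-value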
Physical map positions of QTL confidence intervals detected in the linkage mapping study of [12] were used for fine-mapping.
Multiple testing was accounted for by applying the Benjamini–Hochberg correction [32]. The proportion of phenotypic variation explained by the significant SNPs was computed according to [33].
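A minimal sketch of the false discovery rate control step with statsmodels (the p-value array below is a random placeholder, not results from the study):

import numpy as np
from statsmodels.stats.multitest import multipletests

pvals = np.random.default_rng(1).uniform(size=287390)   # placeholder per-SNP p-values
reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')
significant = np.flatnonzero(reject)                     # indices of significant SNPs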
For each SNP of the marker set, information about the physical map position was available. The extent of linkage disequilibrium in the maize ASMP estimated by [34] was used to determine the genes linked to the SNPs detected in the association analysis: genes located within 2,000 base pairs up- and downstream of a significant association were extracted from the filtered gene set of the maize genome sequence version 5b.
If not stated differently, all analyses were performed using statistical software R [35].
Mori S: Iron acquisition by plants. Curr Opin Plant Biol 1999, 2:250–253.
Marschner H: Mineral Nutrition of Higher Plants (Second Edition). UK: Elsevier; 1995.
Curie C, Briat JF: Iron transport and signaling in plants. Annu Rev Plant Biol 2003, 54:183–206.
Guerinot M: It's elementary: Enhancing Fe3+ reduction improves rice yields. Proc Nat Acad Sci USA 2007, 104:7311–7312.
Vert G, Grotz N, Dédaldéchamp F, Gaymard F, Guerinot M, Briat JF, Curie C: IRT1, an Arabidopsis transporter essential for iron uptake from the soil and for plant growth. Plant Cell 2002, 14:1223–1233.
Römheld V: Existence of two different strategies for the acquisition of iron in higher plants. In Iron Transport in Microbes, Plants and Animals. Edited by Winkelmann G, van der Helm D. Wiley-VCH, Federal Republic of Germany; 1987:353–374.
Curie C, Panaviene Z, Loulergue C, Dellaporta S, Briat JF, Walker E: Maize yellow stripe1 encodes a membrane protein directly involved in Fe III uptake. Nature 2001, 409:346–349.
Kobayashi T, Nishizawa N: Iron uptake, translocation, and regulation in higher plants. Annu Rev Plant Biol 2012, 63:131–152.
Lee S, Ryoo N, Jeon JS, Guerinot M, An G: Activation of rice Yellow stripe1-like 16 (OsYSL16) enhances iron efficiency. Mol Cells 2012, 33:117–126.
Urbany C, Benke A, Marsian J, Huettel B, Reinhardt R, Stich B: Ups and downs of a transcriptional landscape shape iron deficiency associated chlorosis of the maize inbreds B73 and Mo17. BMC Plant Biol 2013, 13:213.
Lee M, Sharopova N, Beavis W, Grant D, Katt M, Blair D, Hallauer A: Expanding the genetic map of maize with the intermated B73 x Mo17 (IBM) population. Plant Mol Biol 2002, 48:453–461.
Benke A, Urbany C, Marsian J, Shi R, von Wirén N, Stich B: The genetic basis of natural variation for iron homeostasis in the maize IBM population. BMC Plant Biol 2014, 14:12.
Nguyen V, Ribot S, Dolstra O, Niks R, Visser R, van der Linden C: Identification of quantitative trait loci for ion homeostasis and salt tolerance in barley (Hordeum vulgare L.). Mol Breeding 2013, 31:137–152.
Long T, Tsukagoshi H, Busch W, Lahner B, Salt D, Benfey P: The bHLH transcription factor POPEYE regulates response to iron deficiency in Arabidopsis roots. Plant Cell 2010, 22:2219–2236.
Tijet N, Helvig C, Pinot F, Le Bouquin R, Lesot A, Durst F, Salaün JP, Benveniste I: Functional expression in yeast and characterization of a clofibrate-inducible plant cytochrome P-450 (CYP94A1) involved in cutin monomers synthesis. Biochem J 1998, 332:583–589.
Colangelo E, Guerinot M: The essential basic helix-loop-helix protein FIT1 is required for the iron deficiency response. Plant Cell 2004, 16:3400–3412.
Mizutani M, Ward E, Ohta D: Cytochrome P450 superfamily in Arabidopsis thaliana: Isolation of cDNAs, differential expression, and RFLP mapping of multiple cytochromes P450. Plant Mol Biol 1998, 37:39–52.
Cho JI, Lee SK, Ko S, Kim HK, Jun SH, Lee YH, Seong H, Lee KW, An G, Hahn TR, Jeon JS: Molecular cloning and expression analysis of the cell-wall invertase gene family in rice (Oryza sativa L.). Plant Cell Rep 2005, 24:225–236.
Nordin K, Vahala T, Palva E: Differential expression of two related, low-temperature-induced genes in Arabidopsis thaliana (L.) Heynh. Plant Mol Biol 1993, 21:641–653.
Hundertmark M, Hincha D: LEA (Late Embryogenesis Abundant) proteins and their encoding genes in Arabidopsis thaliana. BMC Genomics 2008, 9:118.
O'Rourke J, Charlson D, Gonzalez D, Vodkin L, Graham M, Cianzio S, Grusak M, Shoemaker R: Microarray analysis of iron deficiency chlorosis in near-isogenic soybean lines. BMC Genomics 2007, 8:476.
Van Inghelandt D, Melchinger A, Martinant JP, Stich B: Genome-wide association mapping of flowering time and northern corn leaf blight (Setosphaeria turcica) resistance in a vast commercial maize germplasm set. BMC Plant Biol 2012, 12:56.
Larsson S, Lipka A, Buckler E: Lessons from Dwarf8 on the strengths and weaknesses of structured association mapping. PLoS Genet 2013, 9:e1003246.
Benke A, Stich B: An analysis of selection on candidate genes for regulation, mobilization, uptake, and transport of iron in maize. Genome 2011, 54:674–683.
Andersen J, Lübberstedt T: Functional markers in plants. Trends Plant Sci 2003, 8:554–560.
Flint-Garcia S, Thuillet AC, Yu J, Pressoir G, Romero S, Mitchell S, Doebley J, Kresovich S, Goodman M, Buckler E: Maize association population: A high-resolution platform for quantitative trait locus dissection. Plant J 2005, 44:1054–1064.
von Wirén N, Marschner H, Römheld V: Roots of iron-efficient maize also absorb phytosiderophore-chelated zinc. Plant Physiol 1996, 111:1119–1125.
Rozen S, Skaletsky H: Primer3 on the WWW for general users and for biologist programmers. Methods Mol Biol (Clifton, N.J.) 2000, 132:365–386.
Chakravarti I, Laha R, Roy J: Handbook of Methods of Applied Statistics, Volume I. New York: John Wiley and Sons; 1967.
Stich B, Möhring J, Piepho HP, Heckenberger M, Buckler E, Melchinger A: Comparison of mixed-model approaches for association mapping. Genetics 2008, 178:1745–1754.
Kang H, Zaitlen N, Wade C, Kirby A, Heckerman D, Daly M, Eskin E: Efficient control of population structure in model organism association mapping. Genetics 2008, 178:1709–1723.
Benjamini Y, Hochberg Y: Controlling the false discovery rate: A practical and powerful approach to multiple testing. J R Stat Soc Ser B (Methodological) 1995, 57:289–300.
Sun G, Zhu C, Kramer M, Yang SS, Song W, Piepho HP, Yu J: Variation explained in mixed-model association mapping. Heredity 2010, 105:333–340.
Remington D, Thornsberry J, Matsuoka Y, Wilson L, Whitt S, Doebley J, Kresovich S, Goodman M, Buckler E IV: Structure of linkage disequilibrium and phenotypic associations in the maize genome. Proc Nat Acad Sci USA 2001, 98:11479–11484.
R Core Team: R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing; 2012.
We would like to thank the North Central Regional Plant Introduction Station (NCRIPS) for providing seeds of the association mapping population. We also thank Nicole Kliche-Kamphaus, Andrea Lossow, Nele Kaul, and Isabel Scheibert for the excellent technical support. This work was supported by research grants from the Deutsche Forschungsgemeinschaft (STI596/4-1 and WI1728/16-1) and the Max Planck Society.
Max Planck Institute for Plant Breeding Research, Carl-von-Linné Weg 10, 50829 Köln, Germany
Andreas Benke, Claude Urbany & Benjamin Stich
Correspondence to Benjamin Stich.
AB and CU carried out the hydroponic growth of maize genotypes, tissue collection, and phenotype evaluation. AB analyzed the data. AB and BS drafted the manuscript. All authors read and approved the manuscript.
Table S1. Phenotypic variation (%r²) explained by population structure and by kinship for the entire association mapping population set.
Figure S1. Genome-wide P values for association analysis under the Fe-sufficient regime using 267 maize inbred lines of the association mapping population. The horizontal line corresponds to a nominal significance threshold of 5% considering the Benjamini–Hochberg correction for multiple testing. Traits with significant SNPs are represented: shoot water content (H2O; A), root weight (RW; B), and SPAD value of leaf 3 (SP3; C).
Figure S3. Expected P values on the horizontal axis and observed P values on the vertical axis for the QQ plot analysis under the Fe-sufficient regime using 267 maize inbred lines of the association mapping population. The red line corresponds to a normal distribution. Traits with significant SNPs are represented: shoot water content (H2O; A), root weight (RW; B), and SPAD value of leaf 3 (SP3; C).
Figure S2. Expected P values on the horizontal axis and observed P values on the vertical axis for the QQ plot analysis under the Fe-deficient regime using 267 maize inbred lines of the association mapping population. The red line corresponds to a normal distribution. Traits with significant SNPs are represented: leaf necrosis (NEC; A), root weight (RW; B), and shoot dry weight (SDW; C).
Figure S4. Genes sequenced in our study that are reported in the literature to be involved in Fe-homeostasis of maize.
Table S2. List of 267 maize genotypes comprising the source history, pedigree information, and assigned subpopulation.
Table S4. Primer list (forward: F; reverse: R) of sequenced amplicons with base pair (bp) length in B73. The annealing temperature (An. Temp) was empirically determined.
Table S3. Genome-wide P values for association analysis under the Fe-deficient regime using 267 maize inbred lines of the association mapping population. The horizontal line corresponds to a nominal significance threshold of 5% considering the Benjamini Hochberg correction for multiple testing. Traits with significant SNPs are represented: leaf necrosis (NEC;A), root weight (RW;B), and shoot dry weight (SDW;C).
Benke, A., Urbany, C. & Stich, B. Genome-wide association mapping of iron homeostasis in the maize association population. BMC Genet 16, 1 (2015) doi:10.1186/s12863-014-0153-0
Fe-efficiency
Association mapping population
Fine-mapping
Marker assisted selection
Why not send Voyager 3 and 4 along the paths taken by Voyager 1 and 2 to re-transmit their signals as they fly away from Earth?
Voyager 1 and Voyager 2 are on their journey out of the solar system. They collected so much important data that helped us understand our solar system. As these spacecraft move out of the solar system, they are still transmitting information, but at a much slower speed due to the huge distance. Also, their limited power sources are reducing the signal strength with which they can transmit the signal back to Earth. It will not be long before the signals sent by these spacecraft become so weak that it will be difficult to differentiate them from noise.
Why not send follow-up spacecraft after both of them, not just to act as mediators between them and Earth, but also to make the journey into the outer reaches of the solar system and gain more data?
communication voyager radio-communication
Xinus
$\begingroup$ related but not duplicate: Satellites around outer planets that act like amplifier to signals from voyager like objects The reason it is not a duplicate is that this question has heliocentric orbital trajectory aspects. $\endgroup$ – uhoh Apr 20 at 6:05
$\begingroup$ Consider the orbital mechanics. The Voyager probes are transmitting a narrow signal toward Earth. To intercept that signal the following spacecraft has to be on the path of the signal. If the following spacecraft is not expending (large amounts of) energy it cannot follow a straight path. $\endgroup$ – andy256 Apr 20 at 9:52
It's a great question!
To get a few decades more out of them, you can launch Voyagers 3 and 4 sometime around now and get by with a maximally-boosting flyby of Jupiter since you wouldn't target Saturn as well. If you had to wait for Jupiter and Saturn to line up with the original pair's trajectories again, it would be too long of a wait.
However, without Saturn, you'll eventually fall behind again, so this is a stop-gap measure.
I recommend you ask a new question if you'd like some detailed planning for spacecraft that could chase the Voyagers for communications relay purposes. There are a lot of considerations there and the question would have to be more specifically defined.
Link Budget
Let's look at the link budget.
In order to have a useful communications link, you need to receive a signal that's at least roughly the same strength as the local thermal noise of your receiver.
You calculate the ratio of the received power to the transmitted power in decibels by adding the gain of the transmitting and receiving antennas together, then subtracting the path loss. You can read more about that in this answer to the question How to calculate data rate of Voyager 1?
In order to get a data rate of 160 bits per second between Voyager 1 and Earth, you need a 3.66 meter dish on Voyager (48 dBi) and a 70 meter dish on Earth (~73 dBi). Even then you get about a -150 dBm (-180 dBW) signal ($1 \times 10^{-18}$ Watts) and you need a liquid-helium-cooled receiver front-end to pick it up out of the noise.
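For illustration, here is a back-of-the-envelope version of that link budget in Python; the distance, downlink frequency, and radiated power are assumed representative values, not figures taken from the references below.

import math

c = 3.0e8                        # speed of light, m/s
f = 8.4e9                        # X-band downlink frequency, Hz (assumed)
d = 147 * 1.496e11               # ~147 AU in metres (assumed present-day distance)
p_tx_dbw = 10 * math.log10(23)   # ~23 W radiated power (assumed)
g_tx_dbi = 48                    # 3.66 m spacecraft dish
g_rx_dbi = 73                    # 70 m DSN dish

lam = c / f
fspl_db = 20 * math.log10(4 * math.pi * d / lam)   # free-space path loss
p_rx_dbw = p_tx_dbw + g_tx_dbi + g_rx_dbi - fspl_db
print(f"path loss ~{fspl_db:.0f} dB, received ~{p_rx_dbw:.0f} dBW (~{p_rx_dbw + 30:.0f} dBm)")
# roughly -180 dBW, i.e. about -150 dBm, in line with the numbers quoted above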
You can read much more about Voyager communication with Earth in the DESCANSO Design and Performance Summary Series Article 4; Voyager Telecommunications
See also Why does DSN sometimes uses two dishes at the same time to receive Voyager-1?
If you wanted to double the range of Voyager 1 and 2 with Voyager 3 and 4, the second two would need 70 meter dishes that maintained sub-millimeter surface accuracy. This technology is certainly possible but it doesn't exist and would have to be developed.
According to the answer(s) to What's the largest area dish antenna sent beyond the Earth-Moon system?, the answer is only 4.6 meters (Galileo). For antennas deployed in cis-lunar space, the answers to What is the largest antenna deployed in space? show a few things larger, but these would not be suitable.
This kind of thing just isn't done, and it probably won't be, since optical communication is definitely the way to go in the near future. We've already had demonstrations from Earth to the Moon, and there are no known roadblocks to extending optical communications to deep space. Since the wavelength of light (about 1 micron) is so much smaller than the wavelengths used in deep space (centimeters, perhaps millimeters in the future) the "dish" shrinks from a huge steel monstrosity to the mirror of an optical telescope tens of centimeters in diameter. This can be managed quite nicely on a deep space probe.
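To see why the "dish" can shrink so much, compare the diffraction-limited gain of the two apertures; the telescope diameter and aperture efficiency below are assumed example values.

import math

def aperture_gain_dbi(diameter_m, wavelength_m, efficiency=0.6):
    # Approximate boresight gain of a circular aperture: eta * (pi * D / lambda)^2
    return 10 * math.log10(efficiency * (math.pi * diameter_m / wavelength_m) ** 2)

print(aperture_gain_dbi(70, 0.036))       # 70 m dish at X-band      -> ~73 dBi
print(aperture_gain_dbi(0.22, 1.064e-6))  # 22 cm telescope, 1064 nm -> ~114 dBi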
For examples of similarly-sized optical telescopes that have already been in deep space, see answers to What's the largest aperture telescope sent beyond the Earth-Moon system?.
Once deep space optical communications is active, it may definitely be worth considering something like optical relay stations.
Not something you'd want to put in deep space!
Here is what a "steel monstrosity" looks like. This is the 70 meter dish at the Deep Space Network's Goldstone facility, there is also one in Australia and one in Spain. The red lines mark stairs and walkways, for scale.
One in space would of course be lighter, but keeping it stiff enough to produce an accurate surface figure may still make it too heavy to launch to deep space.
above: Photo credit JPMajor, creative commons CC BY-NC-SA 2.0.
above: From commons.wikimedia.org.
uhoh
$\begingroup$ Admit it: The real concern is giving godlike alien probes extra systems to work with. $\endgroup$ – The Nate Apr 20 at 9:46
$\begingroup$ Ah, I was just about to make a V'ger related comment myself ... $\endgroup$ – Hagen von Eitzen Apr 20 at 9:50
$\begingroup$ @Cyclic3 Thanks for the edits! $\endgroup$ – uhoh Apr 20 at 9:57
$\begingroup$ I love how the super-intelligent machines that discovered Voyager 6 could build an enormous spacecraft to travel the galaxy back to its origin but couldn't wipe off a bit of corrosion on the nameplate! $\endgroup$ – GdD Apr 20 at 16:49
$\begingroup$ space.stackexchange.com/questions/35698/… $\endgroup$ – Phil Frost Apr 21 at 14:32
You have a few problems with doing that which @uhoh has already elaborated. Even if you could surmount those, you have a bigger problem, which is that the Voyager probes will have to shut down their science instruments due to power constraints before a probe launched now would be able to get into a position to do any good. There are 4 instruments running on the 2 Voyagers, and in a couple of years 1 will have to be switched off on each because the RTGs are losing 4 Watts of power a year. By 2030 it's likely there won't be any instruments working at all; if we are lucky we will have some engineering data and enough signal to determine the probes' direction and speed. A relay probe wouldn't add much value to that.
Rather than spend huge amounts of money sending spacecraft to interstellar space to relay the transmissions from dying spacecraft carrying 40 year old experiments it makes much more sense to spend huge amounts of money to send spacecraft with brand new experiments designed for that environment.
Voyager 1 doesn't have enough power to transmit signals. Engineers at NASA have disabled some of its components in order to save power. If Voyager 1 can't transmit signals, it would be useless to re-transmit its non-existent signals.
helloworld
Temporal changes in the distinct scattered wave packets associated with earthquake swarm activity beneath the Moriyoshi-zan volcano, northeastern Japan
Yuta Amezawa ORCID: orcid.org/0000-0002-0914-07771,
Masahiro Kosuga1 &
Takuto Maeda1
We investigated temporal changes in the waveforms of S-coda from triggered earthquakes around the Moriyoshi-zan volcano in northeastern Japan. Seismicity in the area increased drastically after the 2011 off the Pacific coast of Tohoku earthquake, forming the largest cluster to the north of the volcano. We analyzed distinct scattered wave packets (DSW), which are S-to-S scattered waves from the mid-crust that appear predominantly in the high frequency range. We first investigated the variation of DSW for event groups with short inter-event distances and high cross-correlation coefficients (CC) in the time window of direct waves. Despite this restriction, DSW showed temporal changes in their amplitudes and shapes. The change occurred gradually in some cases, but the temporal trends were much more complicated in many cases. We also found that the shape of DSW changed within a very short period of time, for example, within ~ 12 h. Next, we estimated the location of the origin of the DSW (DSW origin) by applying semblance analysis to data from the temporary small-aperture array deployed to the north of the largest cluster of triggered events. The DSW origin is located between the largest cluster, within which hypocentral migration had occurred, and the low-velocity zone depicted by a tomographic study. This spatial distribution implies that the DSW origin is composed of geofluid accumulated midway along the upward fluid movement from the low-velocity zone to the earthquake cluster. Though we could not entirely exclude the possibility of effects of the event location and focal mechanisms, the temporal changes in DSW waveforms possibly reflect temporal changes in scattering properties in and/or near the origin. The quick change in DSW waveforms implies that fast movement of geofluid can occur at mid-crustal depths.
Seismic swarms consist of small earthquakes that cause no damage, but they attract much attention from seismologists who are keen to study the contributions of geofluid to the generation of earthquakes. Swarm activity occurs in areas near volcanoes, around injection wells, and sometimes in locations apart from big earthquakes. Among these types of activity, the induced earthquakes caused by artificial fluid injections for wastewater disposal or enhancement of geothermal systems have received not only scientific interest, but social concern as well because often these occur in places where no notable seismicity has occurred prior to the injection (e.g., Deichmann and Giardini 2009; Ellsworth 2013; Rubinstein and Mahani 2015). The earlier analyses of injection-induced earthquakes have enabled us to infer the behavior of fluid in the crust (e.g., Shapiro et al. 2002; Terakawa 2014; Mukuhira et al. 2017).
A recent example of triggered seismicity after big earthquakes is the seismic swarms in many areas in Japan after the 2011 off the Pacific coast of Tohoku earthquake (Mw9.0; hereafter referred to as the Tohoku earthquake) (e.g., Okada et al. 2011, 2015; Kosuga et al. 2012; Terakawa et al. 2013; Yoshida et al. 2019). Hypocenter migration is a common feature of triggered swarms (e.g., Okada et al. 2015). Since the migration is often observed during fluid injection experiments (e.g., Shapiro et al. 1997), and with seismic swarms at volcanoes (e.g., Battaglia et al. 2005; Yukutake et al. 2011, Shelly et al. 2015, 2016) as well, many researchers have suggested the evolution of pore fluid pressure as the cause of the triggered seismicity (Kosuga 2014; Okada et al. 2015; Yoshida and Hasegawa 2018a, b).
One of the most active swarms triggered by the Tohoku earthquake occurred in the area around the Quaternary Moriyoshi-zan volcano in northern Tohoku (Fig. 1). The seismic activity started about 2 months after the Tohoku earthquake and has continued for more than 8 years. Hypocenter migration was observed in the most massive cluster, located approximately 5 km to the north of the volcano (Fig. 1b). The migration started from the bottom of the southeastern part of the cluster (red dots in Fig. 1c) toward the northeastern part and then changed direction to the west. The activity was characterized by several bursts of small clusters and showed upward migration. Detailed features of the spatiotemporal migration were described by Kosuga (2014).
Hypocenter distribution around the Moriyoshi-zan volcano. a Index map. b Areal map of the study area showing shallow seismicity (≤ 15 km) before and after the Tohoku earthquake (blue crosses: January 2006–March 2011; red circles: March 2011–November 2016) from the unified catalog of the Japan Meteorological Agency (JMA). The solid rectangle represents the largest cluster in which hypocenter migration was observed. The broken rectangle represents the area shown in Figs. 2, 8, and 9. Green and orange inverted triangles denote Hi-net and temporary seismic stations, respectively. c Relocated hypocenter distribution in the largest cluster in b. Color of the circles shows the chronological sequential number of earthquakes in the period from March 2011 to November 2016. d Magnitude versus sequential number plot
A notable feature of seismograms from the triggered earthquakes around the Moriyoshi-zan volcano is the appearance of distinct wave packets after the arrival of S-waves (Fig. 2). The waveform of the packets is considerably different from that of direct waves: they do not have a clear onset, and they always appear at almost the same timing after the direct S-wave onset. In addition, the packets have a much longer duration than the direct S-waves. All of these characteristics suggest that the packets are scattered waves originating from medium inhomogeneities rather than independent microearthquakes or reflected waves. Hereafter, we refer to these packets as distinct scattered wave packets (DSW). By using the time difference between S-waves and DSW among several clusters, Kosuga (2014) estimated the scatterer locations assuming S-to-S scattering. The inter-event variation of DSW can be seen in Figure 10 of Kosuga (2014), but the factor behind this variation has not been well studied.
An example of DSW (distinct scattered wave packets). a Location of the epicenter (yellow star) and stations (inverted triangles). Red circles and black triangle denote the epicenter of earthquakes and the location of the Moriyoshi-zan volcano, respectively. b–d Three-component band-pass filtered (8–32 Hz) seismograms from the earthquake (November 9, 2012, 03:46:37 JST (UT + 9), M2.7) observed at the stations shown in a. HR.MAS1 is one of the component stations of the array. The amplitude is normalized to the maximum among the three components. DSW appears in the black rectangles
In this study, we report on the features of DSW, in particular, temporal changes in waveforms observed around the Moriyoshi-zan volcano. We first collect seismograms from nearly the same source location and analyze the spatial and temporal variations. Then, we estimate the locations of the origin of DSW (hereafter referred to as DSW origin) by using the semblance analysis technique applied to data from a temporary small-aperture array. Finally, considering the behavior of temporal changes in DSW and the estimated DSW origin, we discuss the convincing factors for the temporal changes in DSW.
DSW and temporal changes
Waveform types of DSW
Figure 2 shows examples of DSW observed at a Hi-net (National Research Institute for Earth Science and Disaster Resilience 2019) station N.ANIH and two temporary stations HR.MAS1 (one of the component stations of the array) and HR.MRY. The scattered phase had an approximately 1 s duration and S-wave like behavior (Kosuga 2014). Though the time difference between S-waves and DSW was almost constant among the events in each cluster, waveforms of DSW varied considerably (Figure 10 of Kosuga 2014).
The waveforms of DSW were roughly classified into the following three types: (A) single peak, (B) double peaks, and (C) unclear. We performed this classification by visual inspection of the seismograms observed at N.ANIH during the period from May 2011 to November 2016. The resultant proportion of each type was (A) 33.8%, (B) 63.0%, and (C) 3.2%.
To investigate if the earthquakes of the three types were separated in space, we relocated the hypocenters by the HypoDD method (Waldhauser and Ellsworth 2000). We followed the procedures in Kosuga (2014) but for the events with the extended period from March 2011 to November 2016. We used catalog data only. Travel-time data were mostly taken from the unified catalog of the Japan Meteorological Agency (JMA), but we added manually picked data from one of the authors for temporary stations. In total, we used 115,212 picks from the JMA catalog and 13,813 manual picks. The relocated hypocenters (e.g., Fig. 3) showed a strongly clustered distribution dipping eastward, as previously reported by Kosuga (2014).
Three types of DSW and the corresponding hypocenter distribution. Top figures represent seismograms with DSW classified as a single peak (a), double peaks (b), and unclear (c), respectively. Middle and bottom figures show epicenter and depth distribution of earthquakes. Colored and gray crosses denote the hypocenters of each type of DSW and whole earthquakes, respectively
Hypocenter distributions of events classified into the three types showed that there were no clear separations among the types of DSW (Fig. 3). The earthquakes of types A and B almost overlapped, though earthquakes of type B formed some concentrated clusters. The hypocenters of type C were localized on the northeastern side of the whole event distribution, but they were fewer in number than the other types. These facts indicate that the waveform type is insensitive to the hypocentral location. This may be partly related to the insufficient accuracy of the event location.
Temporal changes in DSW
To further examine the waveforms of DSW for collocated events, we used the following two thresholds: inter-event distance and cross-correlation coefficient of the direct waves. We collected events in a grid with a size of 1 km for the N–S, E–W, and depth directions. Then, after applying a band-pass filter of 2–16 Hz, we calculated the cross-correlation coefficient (CC) of both the direct P- and S-waves for all event pairs in the grid. We calculated the CC from the vertical component for P-waves and from the two horizontal components for S-waves, at three Hi-net stations surrounding the grid (N.ANIH, N.KZNH, N.GJOH; Fig. 1b). The length of the time window was 1 s. We adopted event pairs having an average CC ≥ 0.85 as members of a group with similar waveforms. We repeated this procedure by moving the grid with an overlap of 0.5 km to the neighboring grid.
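The pairing criterion can be sketched as a normalized zero-lag cross-correlation of the 1 s direct-wave windows; the function below is a generic numpy illustration, not the code used in the study.

import numpy as np

def zero_lag_cc(a, b):
    # Normalized cross-correlation of two equal-length, band-passed windows
    a = a - a.mean()
    b = b - b.mean()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# An event pair joins a group if the CC averaged over the P and S windows
# at the three surrounding Hi-net stations is at least 0.85.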
Figure 4 shows examples of the envelopes belonging to some groups. To improve the visibility of the DSW, we applied an auto gain control technique to the seismogram envelope. In this technique, the envelope amplitude is modified by multiplying it by a coefficient reciprocal to the RMS amplitude of the envelope within a moving time window of 0.1 s, resulting in the enhancement of the DSW, which have much smaller amplitudes than the direct S-waves.
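A minimal numpy sketch of this auto gain control step (a generic illustration rather than the processing code used in the study; edge effects of the moving window are ignored):

import numpy as np

def auto_gain_control(envelope, dt, window_s=0.1):
    # Divide the envelope by its RMS in a moving window so that weak later
    # phases such as the DSW become visible next to the direct S-wave.
    n = max(1, int(round(window_s / dt)))
    kernel = np.ones(n) / n
    rms = np.sqrt(np.convolve(envelope ** 2, kernel, mode='same'))
    rms[rms == 0] = np.finfo(float).eps
    return envelope / rms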
N–S component band-pass filtered (8–24 Hz) envelopes for events with similar waveform of direct waves at hypocenter grid numbers of a 122, b 125, c 132, d 159, e 164, and f 167. Gray lines on the top in each panel show the overwriting of envelopes. Colored envelopes were modified by applying auto gain control to improve the visibility of the DSW. Color of circles on the left-hand side of envelopes shows the year of event occurrence. Color of circles and squares on the right-hand side of envelopes represents the cross-correlation for direct wave and DSW to the reference event marked by a star. Green lines on the time axis indicate the time window of CC calculations. All traces are aligned with P-wave onset
Since DSW are dominant in a higher frequency range than that of the CC calculations (2–16 Hz), we show band-pass filtered envelopes with the passband of 8–24 Hz. The hypocenter distributions of the events displayed in Fig. 4 are shown in Fig. 5.
Relocated hypocenter distribution of the events in groups a–f with similar waveform shown in Fig. 4 (solid red circles). Gray circles denote the hypocenter distribution of whole earthquakes
In general, the waveforms of P- and S-waves were very similar as we applied two thresholds to ensure the event collocation. Some peaks in P-coda (Fig. 4b, d) were found to have been due to successive earthquakes. Despite the similarity of direct waves, many DSW showed temporal changes in their amplitudes and shapes. For example, in Fig. 4a, the amplitudes of DSW increased with time, and the shapes of DSW changed from a type of unclear peak to another type of clear double peaks. This was a case of systematic change. However, most changes in DSW were much more complicated (e.g., Fig. 4d, e). We also found that DSW shapes changed significantly in a very short time interval, for example, within ~ 12 h (Fig. 6).
Envelopes and hypocenter distribution of the two groups that showed short-term changes in the DSW part. a, b are the same as Fig. 4, but without the overwriting of envelopes. c, d are the relocated hypocenter distributions of the events in groups (a) and (b) (red solid circles). Gray circles denote hypocenter distribution of whole earthquakes
Observed significant changes in DSW can be attributed to either the source effect or the scatterer effect. We will discuss this again after estimating the DSW origin.
The origin of DSW
Array data
We estimated the DSW origin by using data from a small seismometer array operated during the period from November 2012 to May 2014. Kosuga (2014) used a portion of this dataset for hypocenter location and envelope analyses. The L-shaped array located at approximately 10 km north of the Moriyoshi-zan volcano had nine stations with an average spacing of about 150 m and an arm length of about 700 m (Fig. 7). The maximum height difference among the stations was 49 m. Each station was equipped with 1-Hz three-component seismometers. The recording was continuous with a sampling frequency of 100 Hz and a 24-bit resolution.
a Location of the array. Green and orange inverted triangles represent Hi-net and temporary seismic stations, respectively. Red circles denote epicenters of earthquakes occurred during the period from March 2011 to November 2016. b Station distribution of the array. Orange inverted triangles represent the component stations of the array
Semblance analysis
To estimate the apparent slowness and the incident azimuth of DSW, we applied the semblance analysis technique (Neidell and Taner 1971) to DSW observed by the array. We estimated these values by calculating the semblance value given by
$$ S(t, s_{x}, s_{y}) = \frac{\sum_{j=1}^{M} \left[ \sum_{i=1}^{N} u_{i}(t_{j} - \tau_{i} + T_{0i}) \right]^{2}}{N \sum_{j=1}^{M} \sum_{i=1}^{N} u_{i}(t_{j} - \tau_{i} + T_{0i})^{2}}, $$
where \( {s_{x}} \) and \( {s_{y}} \) are the x- and y-component of the apparent slowness, respectively, N is the number of stations, \( u_{i} \left( {t_{j} } \right) \) is the amplitude of the waveform observed at discretized time \( t_{j} \) at the ith station, M is the number of samples in the time window, and \( \tau_{i} \) is the travel time difference between the ith and reference stations. Time \( T_{0i} \) denotes a travel-time correction to compensate for the differences in the altitudes of array stations. The length of the time window was set to 0.33 s. The semblance values were calculated for band-pass filtered seismograms with a passband of 3–12 Hz and by shifting time windows with an interval of 0.067 s. In each time window, semblance values were calculated for a range of slowness, \( - 0.5 \;{\text{s}}/{\text{km}} \le s_{x} ,\;\; s_{y} \le 0.5 \;{\text{s}}/{\text{km}} \), and with a step of 0.025 s/km.
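The grid search over horizontal slowness can be written compactly in numpy. The sketch below evaluates the semblance of one time window for every slowness pair and returns the maximum; sample shifts are rounded and wrap-around at the window edges is ignored for brevity, so it is an illustration of the formula rather than the exact implementation used here.

import numpy as np

def semblance_max(traces, x_km, y_km, dt, t0_corr, s_max=0.5, ds=0.025):
    # traces : (N, M) band-passed waveforms of the N array stations in one window
    # x_km, y_km : station offsets from the reference station (km)
    # t0_corr : elevation corrections T_0i (s)
    N, M = traces.shape
    s_values = np.arange(-s_max, s_max + ds / 2, ds)
    best = (-1.0, 0.0, 0.0)
    for sx in s_values:
        for sy in s_values:
            tau = sx * x_km + sy * y_km                       # plane-wave delays (s)
            shifts = np.round((tau - t0_corr) / dt).astype(int)
            aligned = np.array([np.roll(traces[i], shifts[i]) for i in range(N)])
            num = np.sum(aligned.sum(axis=0) ** 2)
            den = N * np.sum(aligned ** 2)
            s_val = num / den if den > 0 else 0.0
            if s_val > best[0]:
                best = (s_val, sx, sy)
    return best   # (maximum semblance, s_x, s_y)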
Figure 8 shows an example of the semblance analysis. The semblance values for P- and S-waves have sharp peaks both in the incident azimuth and apparent slowness. The incident azimuth agrees well with the back azimuth of the epicenter (Fig. 8c, pink line). About 3 s after the S-wave arrival, DSW appeared with lower apparent slowness than that of the S-wave, especially in the horizontal components. The incident azimuth and apparent slowness of DSW were estimated to be ~ 180° N and ~ 0.15 s/km, respectively. The average of maximum semblance values for DSW from the E–W component was higher than that from the N–S component. Thus, we used the results from the E–W component for the following analysis.
An example of semblance analysis for three-component seismograms from the earthquake (November 25, 2012, 07:55:17 JST (UT + 9), M2.8). Four panels in each component are, from top to bottom, seismograms (a, e, and i), slowness (b, f, and j), back-azimuth (c, g, and k), and maximum semblance value together with the RMS envelope (d, h, and l), respectively. The background color in slowness and back-azimuth plots represents semblance values. Circles in the same panels denote the slowness and back-azimuth with the maximum semblance value in every time step. The red and green arrows indicate the arrival times of P- and S-waves, respectively. Pink lines in c, g, and k indicate the back-azimuth to the event epicenter
Estimation of the DSW origin
Next, we located the DSW origin under the assumption of S-to-S single scattering. The data were the estimated values of incident azimuth and incident angle for DSW with semblance values larger than or equal to 90% of the maximum value (0.90). The time differences between the arrival time of P-wave and DSW were set as 3.8–4.5 s taking into account the range of the arrival times of DSW. We divided up the area around the array (39.9° N–40.2° N, 140.35° E–140.70° E, depth 0–15 km) into grids of 0.005° × 0.005° × 2 km. We then calculated the incident azimuth, incident angle, and travel time difference between P-waves and DSW for a hypothetical scatterer location at a particular grid point. We added a score for grid points which satisfy the criteria of (1) slowness of DSW estimated by semblance analysis, (2) back-azimuth of DSW estimated by semblance analysis, and (3) arrival time difference between P-wave and DSW. We performed this procedure for each event we analyzed and obtained the map of scores. For the calculation of travel time, we assumed a homogeneous medium with Vp = 5.8 km/s and Vs = 3.5 km/s. By repeating this procedure for all combinations of grid points and swarm events, we obtained a spatial distribution of scores, from which we estimated the DSW origin as grid points with high scores.
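The bookkeeping of this scoring procedure can be outlined as follows. The geometry is simplified to straight rays in the homogeneous medium (Vp = 5.8 km/s, Vs = 3.5 km/s) mentioned above, coordinates are assumed to be (east, north, depth) in km, and the tolerance values are placeholders, so this is an illustration of the algorithm rather than the full implementation.

import numpy as np

VP, VS = 5.8, 3.5   # km/s, homogeneous medium

def score_grid(events, grid_points, array_xyz, obs, tol):
    # events: (E, 3) hypocenters; grid_points: (G, 3) candidate scatterer positions
    # obs[e] = (azimuth_deg, slowness_s_per_km, dt_P_to_DSW_s) measured for event e
    # tol    = (tol_az_deg, tol_slowness, tol_dt_s)
    scores = np.zeros(len(grid_points))
    for e, (az_o, slow_o, dt_o) in enumerate(obs):
        for g, gp in enumerate(grid_points):
            v_in = array_xyz - gp                                  # scatterer -> array
            dist_in = np.linalg.norm(v_in)
            az_c = np.degrees(np.arctan2(v_in[0], v_in[1])) % 360.0
            slow_c = np.hypot(v_in[0], v_in[1]) / (dist_in * VS)   # horizontal slowness
            d_es = np.linalg.norm(gp - events[e])                  # event -> scatterer
            d_ea = np.linalg.norm(array_xyz - events[e])           # event -> array
            dt_c = (d_es / VS + dist_in / VS) - d_ea / VP          # DSW minus P time
            if (abs((az_c - az_o + 180) % 360 - 180) <= tol[0]
                    and abs(slow_c - slow_o) <= tol[1]
                    and abs(dt_c - dt_o) <= tol[2]):
                scores[g] += 1
    return scores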
Figure 9 shows the spatial distribution of scores. An area of high scores, i.e., the DSW origin was located about 5 km north of the Moriyoshi-zan volcano and an approximate depth of 13 km. The DSW origin estimated in this study had shifted about 5 km northeast from the best-estimated location proposed by Kosuga (2014), who used a method analogous to the source-scanning algorithm (Kao and Shan 2004). Though the difference between the two locations was in a range of high brightness obtained by the source scanning, we think that the azimuthal restriction from the array analysis was better.
Location of the DSW origin. Blue diamonds denote the accumulated scores given to hypothetical scattering points with the allowable difference between the observed and calculated incident azimuth, incident angle, and travel times. The size of the symbol is proportional to the score. The red circles denote hypocenters. Green and orange inverted triangles are Hi-net and temporary stations, respectively
Interpretation of the estimated DSW origin
The estimated DSW origin was beneath the northern seismic cluster in which hypocentral migration was observed (Fig. 9). According to tomographic results around the Moriyoshi-zan volcano (Okada et al. 2015), there is a low-velocity zone just below the DSW origin, which implies the existence of geofluid.
Before the Tohoku earthquake, the dominant type of focal mechanisms in the Tohoku region was E–W compressional reverse fault that reflects westward subduction of the Pacific plate. However, after the Tohoku earthquake, a significant number of strike-slip earthquakes occurred not only in the investigated area but in the other area of triggered seismicity located about 50 km to the south (e.g., Kosuga et al. 2012; Okada et al. 2015). This change means that the direction of \( \sigma_{2} \) became vertical. According to Sibson (1996), strong directional permeability may develop in the \( \sigma_{2} \) direction within the mesh structure. Yoshida et al. (2012) calculated the coseismic stress change as small as less than 1 MPa in the northern Tohoku region. To explain the change in the focal mechanisms under such small stress fluctuation, Terakawa et al. (2013) suggested the reactivation of pre-existing faults due to the increase in pore fluid pressure. Thus, if the structural condition is met in the Moriyoshi-zan area, geofluid will tend to move upward from the low-velocity zone to the source area of triggered seismicity. Upward hypocenter migration of the seismic cluster just above the DSW origin (Fig. 1c) also supports the hypothesis.
The plausible factors for the temporal changes in DSW
In this study, we used waveform-correlation thresholds to group the events within a cluster. Even by choosing the events with similar waveforms for direct waves and with close distances of hypocenters, we found that the DSW waveforms were considerably different from each other (Figs. 4 and 6). These changes can be more plausibly attributed to the path effect rather than the differences in source location or focal mechanisms.
Possible path effects are the temporal change in the attenuation and/or scattering properties of the DSW origin. Temporal changes in the attenuation coefficient were reported under a fluid-rich condition. Wcisło et al. (2018) reported a statistically significant temporal decrease in the effective attenuation coefficient that is consistent with an observed increase in CO2 upwelling flow within about 1.5 months during the 2008 West Bohemia seismic swarm. Matsumoto et al. (2001) reported a quick temporal change in the location of the scattering source around the Iwate volcano, northeastern Japan. They found that the scattering source moved several kilometers in the period before and after an M6.1 earthquake, a time frame of about 3 months, and interpreted the change as possible fluid migration. These results may be attributed to temporal change in geofluid distribution, as fluid can move with time. Since the waveforms of DSW examined in this study sometimes changed over short intervals of hours or days (Fig. 6), the fluid must have moved very fast.
Kosuga (2014) estimated the hydraulic diffusivity (D) (Shapiro et al. 1997) from the hypocenter migration observed in the same cluster that we examined in this study. The diffusivity D amounted to ~ 0.01 m2/s across the entire investigation period, and the values were in the range of 0.05 to 0.1 m2/s for the initial stage of seismic activity. He noted that the diffusivity varied from 0.01 to 0.7 m2/s over shorter time intervals. These values are comparable to those in other cases, for example, a value of 0.5 m2/s derived from injection-induced seismicity (Shapiro et al. 1997) and values of 0.5–1.0 m2/s (Yukutake et al. 2011) and 0.25–0.5 m2/s (Shelly et al. 2015) derived from swarm activity around active volcanoes. In Fig. 4, we can see that the peak of DSW fluctuates in the range of 0.1–0.3 s. If we assume the fluctuation reflects the position of the DSW origin, the above range corresponds to a position shift of 0.35–1.05 km, assuming Vs = 3.5 km/s. By using the highest value of diffusivity around the Moriyoshi-zan volcano (0.7 m2/s), the time for earthquake migration over the above distances was estimated to have ranged from 3.9 to 34 h, which is consistent with the time interval for short-term shape changes of DSW. Note that the time is for the expansion of the migration front and not for fluid movement; however, rough agreement in terms of the order of the above times suggests that geofluid can move quickly and contribute to the temporal variation of DSW. One of the other plausible causes of rapid change in the DSW is the drastic spatiotemporal change in the elastic properties of the DSW origin. Migration of volatile component such as CO2 in geofluid can lead the change as suggested by Wcisło et al. (2018).
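The arithmetic in this paragraph can be checked directly from the migration-front relation of Shapiro et al. (1997), r = sqrt(4πDt), solved for t:

import math

D = 0.7                      # m^2/s, highest diffusivity reported for this cluster
for r in (350.0, 1050.0):    # m, the position shift implied by 0.1-0.3 s at Vs = 3.5 km/s
    t_h = r ** 2 / (4 * math.pi * D) / 3600.0
    print(f"r = {r:4.0f} m -> t ~ {t_h:.0f} h")
# ~4 h and ~35 h, consistent with the 3.9-34 h range quoted above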
Recently, Zhu et al. (2019) reported that they could monitor a continuous velocity reduction by analyzing scattering coda waves during the dynamic injection of CO2 in Texas. Similar monitoring can be done in other areas using natural earthquakes, though the source location is not unique. This study shows that the monitoring of coda waves has the potential to reveal the spatiotemporal evolution of scatterers. Required refinement, for example, rigorous event selection and statistical grouping of DSW, is the next step of this research.
We investigated the temporal changes in the waveforms of DSW observed around the Moriyoshi-zan volcano in northern Tohoku by using the grouped events of similar waveforms for direct waves. Despite the similarity of direct waves, many of the DSW showed temporal changes in their amplitudes and shapes. In some cases, the change occurred gradually. However, most changes in DSW were much more complicated. We also found that the shape of DSW changed within a very short time interval of ~ 12 h. We then estimated the DSW origin from semblance analysis. The location was beneath the northern seismic cluster in which hypocentral migration was observed, and just above the low-velocity zone beneath the volcano. This spatial distribution suggests the existence of geofluid in the DSW origin. Though we could not entirely exclude the possibility of effects of the event location and focal mechanisms, we gave preference to the change in scattering properties in and/or near the origin because of its proximity to the low-velocity zone and earthquake cluster. The observations of rapid temporal changes in DSW waveforms are suggestive of the fast movement of geofluid in and around the DSW origin, which provides an interesting target for further studies.
The continuous seismic waveform data of Hi-net used in this study are available via the National Research Institute for Earth Science and Disaster Resilience (https://doi.org/10.17598/nied.0003). The HypoDD (Waldhauser and Ellsworth 2000) program code was obtained from an established website (https://www.ldeo.columbia.edu/~felixw/hypoDD.html). Portions of the travel time data for relocating hypocenter locations are available via Japan Meteorological Agency (http://www.data.jma.go.jp/svd/eqev/data/bulletin/). Topography data used for constructing the figures were downloaded from the Geospatial Information Authority of Japan (http://www.gsi.go.jp/kiban/index.html). Figures in this paper were drawn by using the Generic Mapping Tools (GMT) developed by Wessel and Smith (1998). We obtained the GISMO Toolbox-Seismic Data Analysis in MATLAB developed by Thompson and Reyes (2017) from an established website (https://github.com/geoscience-community-codes/GISMO/wiki/GISMO-Analytics) (accessed June 2018), and this was used for the waveform analysis. We obtained The Seismic Analysis Code (SAC) developed by Goldstein and Snoke (2005) from an established website (http://ds.iris.edu/ds/nodes/dmc/software/downloads/sac/), and this was used for the waveform analysis.
DSW:
distinct scattered wave packets
CC:
cross-correlation coefficient
RMS:
root mean square
JMA:
Japan Meteorological Agency
NIED:
National Research Institute for Earth Science and Disaster Resilience
Battaglia J, Ferrazzini V, Staudacher T, Aki K, Cheminée JL (2005) Pre-eruptive migration of earthquakes at the Piton de la Fournaise volcano (Réunion Island). Geophys J Int 161:549–558. https://doi.org/10.1111/j.1365-246X.2005.02606.x
Deichmann N, Giardini D (2009) Earthquakes induced by the stimulation of an enhanced geothermal system below Basel (Switzerland). Seismol Res Lett 80:784–798. https://doi.org/10.1785/gssrl.80.5.784
Ellsworth WL (2013) Injection-induced earthquakes. Science 341(6142):1225942. https://doi.org/10.1126/science.1225942
Goldstein P, Snoke A (2005) SAC availability for the IRIS community. Incorporated Institutions for Seismology Data Management Center Electronic Newsletter
Kao H, Shan SJ (2004) The source-scanning algorithm: mapping the distribution of seismic sources in time and space. Geophys J Int 157:589–594. https://doi.org/10.1111/j.1365-246X.2004.02276.x
Kosuga M (2014) Seismic activity near the Moriyoshi-zan volcano in Akita Prefecture, northeastern Japan: implications for geofluid migration and a midcrustal geofluid reservoir. Earth Planets Space 66:77. https://doi.org/10.1186/1880-5981-66-77
Kosuga M, Watanabe K, Hashimoto K, Kasai H (2012) Seismicity in the northern part of Tohoku District induced by the 2011 off the Pacific Coast of Tohoku Earthquake. Zisin (J Seismol Soc Jpn 2nd Ser) 65:69–83. https://doi.org/10.4294/zisin.65.69 (in Japanese with English abstract)
Matsumoto S, Obara K, Yoshimoto K, Saito T, Ito A, Hasegawa A (2001) Temporal change in P-wave scatterer distribution associated with the M 6.1 earthquake near Iwate volcano, northeastern Japan. Geophys J Int 145:48–58. https://doi.org/10.1111/j.1365-246X.2001.00339.x
National Research Institute for Earth Science and Disaster Resilience (2019), NIED Hi-net, National Research Institute for Earth Science and Disaster Resilience, https://doi.org/10.17598/nied.0003
Neidell NS, Taner MT (1971) Semblance and other coherency measures for multichannel data. Geophysics 36:482–497. https://doi.org/10.1190/1.1440186
Okada T, Yoshida K, Ueki S, Nakajima J, Uchida N, Matsuzawa T, Umino N, Hasegawa A (2011) Shallow inland earthquakes in NE Japan possibly triggered by the 2011 off the Pacific coast of Tohoku Earthquake. Earth Planets Space 63:44. https://doi.org/10.5047/eps.2011.06.027
Okada T, Matsuzawa T, Umino N, Takahashi H, Yamada T, Kosuga M, Takeda T, Kato A, Igarashi T, Obara K, Sakai S, Saiga A, Iidaka T, Iwasaki T, Hirata N, Tsumura N, Yamanaka Y, Terakawa T, Nakamichi H, Okuda T, Horikawa S, Katao H, Miura T, Kubo A, Matsushima T, Goto K, Miyamachi H (2015) Hypocenter migration and crustal seismic velocity distribution observed for the inland earthquake swarms induced by the 2011 Tohoku-Oki earthquake in NE Japan: implications for crustal fluid distribution and crustal permeability. Geofluids 15:293–309. https://doi.org/10.1111/gfl.12112
Rubinstein JL, Mahani AB (2015) Myths and facts on wastewater injection, hydraulic fracturing, enhanced oil recovery, and induced seismicity. Seismol Res Lett 86:1060–1067. https://doi.org/10.1785/0220150067
Shapiro SA, Huenges E, Borm G (1997) Estimating the crust permeability from fluid-injection-induced seismic emission at the KTB site. Geophys J Int 131:15–18. https://doi.org/10.1111/j.1365-246X.1997.tb01215.x
Shapiro SA, Rothert E, Rath V, Rindschwentner J (2002) Characterization of fluid transport properties of reservoirs using induced microseismicity. Geophysics 67:212–220. https://doi.org/10.1190/1.1451597
Shelly DR, Taira TA, Prejean SG, Hill DP, Dreger DS (2015) Fluid-faulting interactions: fracture-mesh and fault-valve behavior in the February 2014 Mammoth Mountain, California, earthquake swarm. Geophys Res Lett 42:5803–5812. https://doi.org/10.1002/2015GL064325
Shelly DR, Ellsworth WL, Hill DP (2016) Fluid-faulting evolution in high definition: connecting fault structure and frequency-magnitude variations during the 2014 Long Valley Caldera, California earthquake swarm. J Geophys Res Solid Earth 121:1776–1795. https://doi.org/10.1002/2015JB012719
Sibson RH (1996) Structural permeability of fluid-driven fault-fracture meshes. J Struct Geol 18:1031–1042. https://doi.org/10.1016/0191-8141(96)00032-6
Terakawa T (2014) Evolution of pore fluid pressures in a stimulated geothermal reservoir inferred from earthquake focal mechanisms. Geophys Res Lett 41:7468–7476. https://doi.org/10.1002/2014GL061908
Terakawa T, Hashimoto C, Matsu'ura M (2013) Changes in seismic activity following the 2011 Tohoku-oki earthquake: effects of pore fluid pressure. Earth Planet Sci Lett 365:17–24. https://doi.org/10.1016/j.epsl.2013.01.017
Thompson G, Reyes C (2017) GISMO—a seismic data analysis toolbox for MATLAB [Software Package]
Waldhauser F, Ellsworth WL (2000) A double-difference earthquake location algorithm: method and application to the northern Hayward fault, California. Bull Seismol Soc Am 90:1353–1368. https://doi.org/10.1785/0120000006
Wcisło M, Eisner L, Málek J, Fischer T, Vlček J, Kletetschka G (2018) Attenuation in West Bohemia: evidence of high attenuation in the Nový Kostel focal zone and temporal change consistent with CO2 degassing. Bull Seismol Soc Am 108:450–458. https://doi.org/10.1785/0120170168
Wessel P, Smith WHF (1998) New, improved version of the Generic Mapping Tools releases. Eos Trans AGU 79:579
Yoshida K, Hasegawa A (2018a) Hypocenter migration and seismicity pattern change in the Yamagata-Fukushima border, NE Japan, caused by fluid movement and pore pressure variation. J Geophys Res Solid Earth 123:5000–5017. https://doi.org/10.1029/2018JB015468
Yoshida K, Hasegawa A (2018b) Sendai-Okura earthquake swarm induced by the 2011 Tohoku-Oki earthquake in the stress shadow of NE Japan: detailed fault structure and hypocenter migration. Tectonophysics 733:132–147. https://doi.org/10.1016/j.tecto.2017.12.031
Yoshida K, Hasegawa A, Okada T, Iinuma T, Ito Y, Asano Y (2012) Stress before and after the 2011 great Tohoku-oki earthquake and induced earthquakes in inland areas of eastern Japan. Geophys Res Lett 39:L03302. https://doi.org/10.1029/2011GL049729
Yoshida K, Hasegawa A, Yoshida T, Matsuzawa T (2019) Heterogeneities in stress and strength in tohoku and its relationship with earthquake sequences triggered by the 2011 M9 Tohoku-Oki earthquake. Pure Appl Geophys 176:1335–1355. https://doi.org/10.1007/s00024-018-2073-9
Yukutake Y, Ito H, Honda R, Harada M, Tanada T, Yoshida A (2011) Fluid-induced swarm earthquake sequence revealed by precisely determined hypocenters and focal mechanisms in the 2009 activity at Hakone volcano, Japan. J Geophys Res Solid Earth 116:B4308. https://doi.org/10.1029/2010JB008036
Zhu T, Ajo-Franklin J, Daley TM, Marone C (2019) Dynamics of geologic CO2 storage and plume motion revealed by seismic coda waves. PNAS 116:2464–2469. https://doi.org/10.1073/pnas.1810903116
We thank Takahiro Shiina of the Earthquake Research Institute, University of Tokyo, for very valuable discussions that helped improve this study. We are grateful to two anonymous reviewers for their comments, which were very helpful for improving the manuscript.
This study was supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan, under its Earthquake and Volcano Hazards Observation and Research Program (HRS_01). This study was supported by JSPS KAKENHI Grant Numbers 21109002 and 15H01135.
Graduate School of Science and Technology, Hirosaki University, 3 Bunkyo-cho, Hirosaki, Aomori, 036-8561, Japan
Yuta Amezawa, Masahiro Kosuga & Takuto Maeda
Yuta Amezawa
Masahiro Kosuga
Takuto Maeda
MK conceived the presented idea and performed array observations. YA and MK performed the data analyses. All authors contributed to the discussion and writing of the manuscript. All authors read and approved the final manuscript.
Correspondence to Yuta Amezawa.
Amezawa, Y., Kosuga, M. & Maeda, T. Temporal changes in the distinct scattered wave packets associated with earthquake swarm activity beneath the Moriyoshi-zan volcano, northeastern Japan. Earth Planets Space 71, 132 (2019). https://doi.org/10.1186/s40623-019-1115-6
Coda waves
Triggered earthquakes
Seismic swarm
Geofluid
The 2011 off the Pacific coast of Tohoku earthquake
vector taylor expansion
The Taylor series expansion is a widely used method for approximating a complicated function by a polynomial.
To obtain notationally uncluttered expressions for higher-order expansions, one switches to the use of tensor notation. The Taylor series expansion of a vector-valued function h(x) about a point \(x_0\), to first order in \(\Delta x = x - x_0\), is
$$ h(x) = h(x_0 + \Delta x) = h(x_0) + \frac{\partial h(x_0)}{\partial x}\,\Delta x + \text{h.o.t.}, $$
where \(\partial h(x_0)/\partial x\) is the Jacobian; for a scalar-valued f, the gradient is a column vector in \(\mathbb{R}^d\) whose ith coordinate is \(\partial f/\partial x_i\). (The case where f is scalar-valued, or where the argument is a scalar, can be handled with a 1-dimensional vector.) For any \(x \in [a, b]\), \(f'(x)\) is the number, if there is one, for which \(\lim_{t \to x} \left[ \frac{f(t) - f(x)}{t - x} - f'(x) \right] = 0\); if \(f = (f_1, \ldots, f_n)\) with each \(f_i\) a real-valued function, the derivative is taken componentwise. Note that the Hessian matrix of a function can be obtained as the Jacobian matrix of the gradient vector of that function; the second-order term of the expansion is more complicated because it is quadratic in \(\Delta x\). The multivariate Taylor series expansion is found by specifying both the vector of variables and the vector of values defining the expansion point. Taylor polynomials are incredibly powerful for approximations and analysis, with applications ranging from the Legendre (multipole) expansion in classical electrodynamics to one-dimensional models of the tokamak edge.
The following definition is the matrix generalization of (2, 2) and (2, 3). Therefore, in complete analogy with the polynomial matrix function (2, 2) we can define the matrix function via a Taylor series. . Last Post; Oct 16, 2009; Replies 5 Views 2K. Since we assumed that <math>n</math> is a continuous variable anyway, we could immediately do a Taylor expansion of <math>p(n+1,\vec{r}-\vec{s})</math> around <math>p(n,r)</math> treating n as just another independent variable in the expansion. 2.1 Arc length and tangent vector. hide. (4.6), the second-order Taylor's expansion for cos x at the point x* = 0 is given as (b) EXAMPLE 4.9 Linear Taylor's Expansion of a Function syms x y f = y*exp (x - 1) - x*log (y); T = taylor (f, [x y], [1 1], 'Order' ,3) T =. Let's compute the Taylor series for sin (x) at point a = 0. There really isn't all that much to do here for this problem. Given a function f: Rm!Rn, its derivative df(x) is the Jacobian matrix.For every x2Rm, we can use the matrix df(x) and a vector v2Rm to get D vf(x) = df(x)v2Rm.For xed v, this de nes a map x2Rm!df(x)v2Rn, like the original f. The crudest approximation was just a constant. Theorem: Suppose \(f:\real^d . This is the rst two terms in the Taylor expansion of f about the point x0. Convergence of a Taylor series of a function to its values on a neighborhood of a point is equivalent to analyticity on that neighborhood. The mean vector and covariance matrices that represent the noisy speech statistics are computed as First-order Vector Taylor Series expansion (VTS-1): In the case of the rst-order Taylor series expansion of the resulting distribution of z is also Gaussian when x is Gaussian. Notes: This paper presents the Taylor-series expansion solution of near-wall velocity and temperature for a compressible Navier-Stokes-Fourier system with a no-slip curved boundary surface . That is, we set h = x a and g(t) = f(a+ t(x a)) = f(a+ th): 2 This can be generalized to the multivariate case. Find the multivariate Taylor series expansion by specifying both the vector of variables and the vector of values defining the expansion point. Now the term representing the change becomes the vector ~x ~a = (x a,y b)T. The gradient . Expression (2.2) decomposes f (x) into two parts, the approximation of the derivative and the truncation error. Second term in the series =. Suppose f : Rn!R is of class Ck on a convex open set S. We can derive a Taylor expansion for f(x) about a point a 2Sby looking at the restriction of fto the line joining a and x. 1. Compute the second-order Taylor polynomial of \(f(x,y,z) = xy^2e^{z^2}\) at the point \(\mathbf a = (1,1,1)\). Apply the Taylor series expansion formula: For better understanding of the series lets calculate each term individually for first few terms. . If you want more accuracy, you keep more terms in the Taylor series. Suppose now that the Taylor series of the scalar function is convergent for expansion point : ( )() ! Taylor first order expansion for multivariable function using total derivative. Jackson says that we gonna expand ( x ) around x' = x., but the expansion of ( x ) should also contain the first order derivative of ( x ) like , also the taylor expansion of the second term should contain 2 at the denominator but it's 6, and how the last term of O ( a 2) So what I am thinking is that Taylor expansion . The above Taylor series expansion is given for a real values function f (x) where . 
We are working with cosine and want the Taylor series about x = 0 x = 0 and so we can use the Taylor series . (2(|r-d|) ) the denominator evaluates to 2|r| for d=0; the derivative of a dot product is a vector (more specifically a one-row-matrix): ( |r-d| )' = -2 (r-d) T which evaluates . Each successive term will have a larger exponent or higher degree than the preceding term. This makes perfect sense because its coefficient in the Taylor expansion of exp(x + 2*y + 3*z) is 0.5, and in the Taylor expansion of exp(3*x + 2*y . As far as I know, Taylor expansion works with fixed function, in my case, I am going to have feature transformation . where is a vector and and are respectively the gradient vector and the Hessian matrix (first and second order derivatives in single variable case) of the . The mean vector and covariance matrices that represent the noisy speech statistics are computed as First-order Vector Taylor Series expansion (VTS-1): In the case of the rst-order Taylor series expansion of the resulting distribution of z is also Gaussian when x is Gaussian. I'm familiar with taylor series in one or more variables, but I'm confused on what to do. F(t0 +t) F(t0) The next better approximation included a correction that is linear in t. The first tern would be =. syms x y f = y*exp (x - 1) - x*log (y); T = taylor (f, [x y], [1 1], 'Order' ,3) T =. Taylor series and linearisation. In this paper, using the partial derivative of a matrix with respect to a vector and the . To calculate dl at 0 of the exponential function to order 5 . Show activity on this post. Taylor expansion is one of the many mathematical tools that is applied in Mechanics and Engineering. The Delta Method gives a technique for doing this and is based on using a Taylor series approxi-mation. (x a)k: When I do this in Mathematica, the output gives me terms like v ( 1. u) rather than simplifying this . This function is expensive to . We let ~x = (x,y) and ~a = (a,b) be the point we are expanding f(~x) about. But yes, the first-order term is the Jacobian, can be interpreted as a matrix operation, etc. The higher Taylor series are very nice when viewed through the lens of tensor calculus. A modification to the calculation in Witten is also possible. . In particular, by keeping one additional term, we get what is called a \second-order approximation". A partial sum of a series expansion can be used to approximate a . I think it can be expanded as a vector form of taylor series as ( r + l ) = ( r ) + l .
of the direction vector ~u = hcos . ( x a) 3 + . Solution. . Euler's Method, Taylor Series Method, Runge Kutta Methods, Multi-Step Methods and Stability. (A I)j: Our starting point is the more general Taylor series expansion in terms of Fr echet derivatives, obtained by Al-Mohy and Higham [2 . REVIEW: We start with the dierential equation dy(t) dt = f (t,y(t)) (1.1) y(0) = y0 This equation can be nonlinear, or even a system of nonlinear equations (in which case y is a vector and f is a vector of n dierent functions). The calculator can calculate Taylor expansion of common functions. Note that the Hessian matrix of a function can be obtained as the Jacobian matrix of the gradient vector of : (419) Approximating first-order derivatives Now, consider Taylor's expansion up to order two, f(x + h) = f(x) + hf (x) + h2 2 f () with [x, x + h], from which we get (2.2) f (x) = f ( x + h) f ( x) h h 2f (). Another application of linear fields, which will require development of the Lie-Taylor expansion for the momentum conservation equation along the lines of the mass conservation equation in 2b, is to partially ionized plasma where mass and momentum sources allow outflow boundary conditions, e.g. Let V be an open subset of M, such that for any x2X, the line segment joining xto plies in V. This line segment has the natural parametrization x+t(p x), where t2[0;1]. Remainder term for Taylor polynomials The Taylor series theorems found in Higham's monograph [9] primarily in-volve expanding f(A) about a multiple of the identity matrix, I: f(A) = X1 j=0 f(j)( ) j! This is called the kth-order Taylor approximation of fat x. Section 4-16 : Taylor Series. Answer (1 of 2): Let me start by stating Taylor' Theorem for a single variable. 2.1. Wolfram|Alpha can compute Taylor, Maclaurin, Laurent, Puiseux and other series expansions. I need to non-linearly expand on each pixel value from 1 dim pixel vector with taylor series expansion of specific non-linear function (e^x or log(x) or log(1+e^x)), but my current implementation is not right to me at least based on taylor series concepts.The basic intuition behind is taking pixel array as input neurons for a CNN model where each pixel should be non-linearly expanded with . The derivative of sin (x) = cos (x) ( r ) +.. in analogy with general taylor series expansion of f ( x a) = f ( a) + x f ( a) + x 2 2! The Taylor expansion of the ith component is: (416) The first two terms of these components can be written in vector form: (417) where is the . syms x y f = y*exp (x - 1) - x*log (y); T = taylor (f, [x y], [1 1], 'Order' ,3) T =. I have seen this formula in a book.
The Taylor series is a method for re-expressing functions as polynomial series. I've got a real-valued function of several vectors f ( u, v, w) formed by taking scalar products of linear combinations of the vectors, I want to Taylor expand around small v by writing. Exact forms of Taylor expansion for vector-valued functions have been incorrectly used in many statistical publications. Taylor's Theorem for smooth functions Let pbe a point of a real a ne space M of nite dimension, whose tangent space at any point is the real vector space X of nite dimension. Taylor-expansion algorithm The most common integration algorithm used in the study of biomolecules is due to Verlet [11]. How to do a taylor expansion in a small parameter ( d << r) SOLVED! Taylor series is the polynomial or a function of an infinite sum of terms. For example, the rst-order Taylor approximation of a function f: Rd!R that's differentiable at x2Rdis given by f(x+ x) f(x) + xTrfj x: Here rfj xis the gradient of fat x, i.e. Starting point is a m-dimensional vector-valued function , where the input is also a n-dimensional vector: . Definition 2.2. Here is the Taylor expansion for a vector valued function. Taylor expansion; Fourier series; Vector algebra; Vector Calculus; Multiple integrals; Divergence theorem; Green's theorem Stokes' theorem; First order equations and linear second order differential equations with constant coefficients; Matrices and determinants; Algebra of complex numbers; Mechanics and General Properties of Matter . Vector Multivariable Advanced Specialized Miscellaneous v t e In mathematics, the Taylor series of a function is an infinite sum of terms that are expressed in terms of the function's derivatives at a single point. You can evaluate a function at 0. Electrostatics-Taylor expansion. Last Post; Oct 24, 2006; Replies 8 Views 8K. I wanted to compute Taylor series expansion of each pixel values from the pixel array. The in- and output of this function are numpy vectors. Context: (in case it helps) Im trying to study stability of Lagrangian points in a 3 body problem: Muphrid 834 2 The analogy for Taylor expansions of vector fields is most easily seen through directional derivatives. ( x a) + f ( a) 2! The corresponding weights are given by the derivatives of the function f ( x) that is approximated. I am wondering how Taylor expansion is going to approximate each pixel values with certain approximation order. I'm not sure this question even deserves to be posted on this great forum, but I've been stuck the past 2 hours on a relatively easy thing. If you specify the expansion point as a scalar a, taylor transforms that scalar into a vector of the same length as . 1.2 The Taylor Series De nition: If a function g(x) has derivatives of order r, that is g(r)(x) = dr dxr g(x) exists, then for any constant a, the Taylor polynomial of order rabout ais T r(x) = Xr k=0 g(k)(a) k! If we're really slick, we can save the first coefficients for these polynomials in a vector, call them say ., and then we can evaluate some approximation of f by summing up the first k terms . changes depending on \(n\) (scalar, vector, matrix, etc.). save. But I can't be sure about it. scalar-valued functions for simplicity; the generalization to vector-valued functions is straight-forward. ( x a) 2 + f ( a) 3! 
calculus functions derivatives partial-differential-equations differential-equations sequences vector-field taylor-series parametric-equation integrals limits taylor-expansion calculus-2 vector-calculus second-order-differential-equations taylor-polynomial polar-coordinates calculus-1 calculus-3 multiple-integrals Taylor's theorem and its remainder can be expressed in several different forms depending the assumptions one is willing to make. A field with domain R is sometimes referred to as a field on R. Define the gradient of a field using the Taylor expansion of the field, assuming such an expansion exists. A multipole expansion provides a set of parameters that characterize the potential due to a charge distribution of finite size at large distances from that distribution. We offer two methods to corre We use cookies to enhance your experience on our website.By continuing to use our website, you are agreeing to our use of cookies. Vector, point, and tensor fields are defined analogously, that is, for example, a vector field v has the vector value v(x) at x. In this module, we will derive the formal expression for the univariate Taylor series and discuss some important consequences of . Vector Multivariable Advanced Specialized Miscellaneous v t e In calculus, Taylor's theorem gives an approximation of a k -times differentiable function around a given point by a polynomial of degree k, called the k th-order Taylor polynomial. I need basically one dimension higher than that. Taylor's Theorem Di erentiation of Vector-Valued Functions Di erentiation of Vector Valued Functions De nition (5.16) Let f be de ned on [a;b] taking values in Rn. Questions of this type involve using your knowledge of one variable Taylor polynomials to compute a higher order Taylor .
We see how to do a Taylor expansion of a function of several variables, and particularly for a vector-valued function of several variables. You will also need to compute a higher order Taylor polynomial \(P_{\mathbf a, k}\) of a function at a point. You can take a derivative, Then, we can compute the Taylor series expansion of f about 0 in the usual way, and so on. Pandas how to find column contains a certain value Recommended way to install multiple Python versions on Ubuntu 20.04 Build super fast web scraper with Python x100 than BeautifulSoup How to convert a SQL query result to a Pandas DataFrame in Python How to write a Pandas DataFrame to a .csv file in Python The goal Let us consider a segment of a parametric curve between two points ( ) and ( ) as shown in Fig. (2.3) . I don't know how to derive this myself. 0. Home Calculators Forum Magazines Search Members Membership Login L. Taylor expansion . The Verlet integrator is based on two Taylor expansions, a forward expansion (t + At) and a backward expansion (t At),. 1 comment. Taylor expansion. f ( a) + f ( a) 1! The Verlet algorithm is not self-starting. Taylor expansion; Fourier series; Vector algebra; Vector Calculus; Multiple integrals; Divergence theorem; Green's theorem Stokes' theorem; First order equations and linear second order differential equations with constant coefficients; Matrices and determinants; Algebra of complex numbers; Mechanics and General Properties of Matter . The theorem for several variables is built upon the case for a single variable. If you specify the expansion point as a scalar a, taylor transforms that scalar into a vector of the same . Last Post; Apr 15, 2009; Replies 1 Views 5K. Taylor series expansion of exponential functions and the combinations of exponential functions and logarithmic functions or trigonometric functions. Usual function Taylor expansion. Theorem: Suppose \(f:\real^d . Basic form. Taylor Expansions in 2d In your rst year Calculus course you developed a family of formulae for approximating a function F(t) for tnear any xed point t0. I think of an expansion with non-projected vectors such as. =0 . Taylor series expansion of f (x)about x =a: Note that for the same function f (x); its Taylor series expansion about x =b; f (x)= X1 n=0 dn (xb) n if a 6= b; is completely dierent fromthe Taylorseries expansionabout x =a: Generally speaking, the interval of convergence for the representing Taylor series may be dierent from the domain of . Taylor's series expansions in three dimensions . The Taylor series expansion is a widely used method for approximating a complicated function by a polynomial. Abstract.
Its length can be approximated by a chord length , and by means of a Taylor expansion we have. Rotations are taught poorly in analytic geometry, but are very nice when viewed through the lens of linear algebra.
Ken Kreutz-Delgado (UC San Diego) ECE 275A November 1, 2013 10 / 25 The Taylor approximation won't be accurate, but often that doesn't matter. vector-valued function of a vector w which is di eren-tiable at a point w 0. LINEAR ALGEBRA AND VECTOR ANALYSIS MATH 22A Unit 17: Taylor approximation Lecture 17.1. $\begingroup$ Multivariable Taylor series is needed to prove second derivative test (at least second order Taylor expansion). For most common functions, the function and the sum of its Taylor series are equal near this point. Multivariable taylor polynomial of a transformation. + higher order terms. What is the vector form of Taylor series expansion? For example, to calculate Taylor expansion at 0 of the cosine function to order 4, simply enter taylor_series_expansion ( cos ( x); x; 0; 4) after calculation, the result is returned. share. (2.1) to the first order approximation. Of course I could do the expansion by hand an enter the result in Maple but I think it would be a very nice feature because an expansion of vector fields which vary in space and time is such a common problem e.g. changes depending on \(n\) (scalar, vector, matrix, etc.).
rewrite the above Taylor series expansion for f(x,y) in vector form and then it should be straightforward to see the result if f is a function of more than two variables. Last Time Using Ampere's Law (no name) B 0 B da S 0 Ampere's law: B 0 J B d 0 I enc Key to using Ampere's law: Your answer's only as good as your assumptions so be Taylor's Expansion of a Function of One Variable Approximate f ( x) = cos x around the point x* = 0. Copy Code. Derivatives of the function f ( x) are given as (a) Therefore, using Eq.
Last Post; Apr 4, 2013; Replies 1 Views 2K. Use one of the Taylor Series derived in the notes to determine the Taylor Series for f (x) =cos(4x) f ( x) = cos. ( 4 x) about x = 0 x = 0. How to do a Taylor expansion of a vector-valued function. Taylor's Theorem implies that fcan be approximated around w 0 as follows: f ( w) = 0) +J yw . Thinking about it, might have this licked, but please don't hesitate to post something if you know the answer offhand.
If you specify the expansion point as a scalar a, taylor transforms that scalar into a vector of the same length as . (13)] is often used to initiate the propagation. f ( u, v, w) = A + B + O ( 2) for small real .
The new mean vector is computed as In a similar fashion, the new . Taylor's theorem and its remainder can be expressed in several different forms depending the assumptions one is willing to make. This approach is the rational behind the use of simple linear approximations to complicated functions. Related Threads on Bloch vector from taylor expansion Taylor expansion of a vector function. polynomial to coefficient vector representation, and then use the polyval command to evaluate the polynomial, as follo ws: Taylor Series Expansion for Some Basic Functions The following is a list of Taylor/Maclaurin/power series expansions (at = r) for several frequently encountered analytic functions. 5.4.3 Multipole Expansion of the Vector Potential 7.1.1-7.1.3 Ohm's Law & Emf HW7 Announcements: Test in 2 weeks! This gives the same result as above .
( Taylor expansion) The Taylor expansion takes the function f ( x) and approximates it with a constant function. F(t0 + t) F(t0) +F(t0)t A series expansion is a representation of a mathematical expression in terms of one of the variables, often using the derivative of the expression to compute successive terms in the series. D.1 Directional derivative, Taylor series D.1.1 Gradients Gradient of a dierentiable real function f(x): RKR with respect to its vector domain is dened f(x) = f(x) x1 f(x) x.2.. f(x) xK RK (1354) while the second-order gradient of the twice dierentiable real function with
The Taylor expansion of the ith component is: (416) The first two terms of these components can be written in vector form: (417) where is the . It then expands this approximation further and further by adding more basis functions from the power series.
A lower order Taylor expansion [e.g., Eq.
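As a quick illustration, the following is a minimal numerical check of the first-order vector expansion in Python with NumPy. The function h, its analytic Jacobian and the expansion point are arbitrary choices made for this example and are not taken from any of the sources above.

import numpy as np

def h(x):
    # Example vector-valued function h: R^2 -> R^2 (illustrative choice)
    return np.array([x[0] * x[1], np.sin(x[0]) + x[1] ** 2])

def jacobian_h(x):
    # Analytic Jacobian of h evaluated at x
    return np.array([[x[1], x[0]],
                     [np.cos(x[0]), 2.0 * x[1]]])

x0 = np.array([1.0, 2.0])        # expansion point
dx = np.array([1e-3, -2e-3])     # small displacement

exact = h(x0 + dx)
first_order = h(x0) + jacobian_h(x0) @ dx   # h(x0) + J(x0) dx
print(exact, first_order, np.abs(exact - first_order).max())

The residual of the comparison is of order ||dx||^2, as expected for a first-order expansion.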
$\cos\theta_{1}^{r}\cos\theta_{2}^{r}$ from Sirunyan, Albert M et al.
Measurement of the top quark polarization and $\mathrm{t\bar{t}}$ spin correlations using dilepton final states in proton-proton collisions at $\sqrt{s}=$ 13 TeV
https://www.hepdata.net/record/90640
No Journal Information
The CMS collaboration
INSPIRE: http://inspirehep.net/record/1742786
Abstract (data abstract)
Unfolded normalized differential cross sections for angular observables sensitive to the top quark polarization and $t\bar{t}$ spin correlations measured in dileptonic $t\bar{t}$ events at CMS, and the corresponding extracted coefficients and values of $f_\text{SM}$. Statistical and systematic covariance matrices are given for all measured bins (or coefficients) together, in the same order as the distributions are given on this page. Care must be taken in the choice of distributions for simultaneous fitting. When using a simple $\chi^2$ minimization procedure, the statistical uncertainty in the fitted parameter can be reduced by adding to the fit an observable which has little or no sensitivity to the fitted parameter, but which is not independent of the other observables in the fit. This problem can be avoided by requiring the set of fitted observables to be mutually independent. For example, $\cos\theta_{1}^{k}$ and $\cos\theta_{1}^{k*}$ should not be used together, because $|\cos\theta_{1}^{k}|=|\cos\theta_{1}^{k*}|$.
Measured distributions and tables: \cos\theta_{1}^{k}, \cos\theta_{2}^{k}, \cos\theta_{1}^{r}, \cos\theta_{2}^{r}, \cos\theta_{1}^{n}, \cos\theta_{2}^{n}, \cos\theta_{1}^{k*}, \cos\theta_{2}^{k*}, \cos\theta_{1}^{r*}, \cos\theta_{2}^{r*}, \cos\theta_{1}^{k}\cos\theta_{2}^{k}, \cos\theta_{1}^{r}\cos\theta_{2}^{r}, \cos\theta_{1}^{n}\cos\theta_{2}^{n}, c_{1}^{r}c_{2}^{k}+c_{1}^{k}c_{2}^{r}, c_{1}^{r}c_{2}^{k}-c_{1}^{k}c_{2}^{r}, c_{1}^{n}c_{2}^{r}+c_{1}^{r}c_{2}^{n}, c_{1}^{n}c_{2}^{r}-c_{1}^{r}c_{2}^{n}, c_{1}^{n}c_{2}^{k}+c_{1}^{k}c_{2}^{n}, c_{1}^{n}c_{2}^{k}-c_{1}^{k}c_{2}^{n}, \cos\varphi, \cos\varphi_{\mathrm{lab}}, |\Delta\phi_{\ell\ell}|, statistical covariance matrix, systematic covariance matrix, coefficients, statistical covariance matrix for coefficients, systematic covariance matrix for coefficients, sums and differences of B coefficients, and f_\text{SM}.
By Thomas van Kaathoven on Thu, 18/01/2018 - 09:56
Johannes Diderik van der Waals was born on the 23rd of November 1839. He was, without a doubt, one of the most influential Dutch physicists. He was of great importance for the blossoming of the sciences in our country and contributed much to the standing of the Netherlands in the sciences. J.D. was originally educated to be a teacher. In 1866 he started as a teacher at the H.B.S. in The Hague, and later he became its director. He studied physics and mathematics at Leiden University while still working at the H.B.S. In 1873 he obtained his PhD with the dissertation titled "Over de continuïteit van den gas- en vloeistoftoestand" (on the continuity of the gaseous and liquid state). In this dissertation he first published his now well-known law:
$$\left( p + \frac{a}{V_m^2} \right)\left( V_m -b\right)=RT$$
a correction of the ideal gas law obtained by giving the gas molecules their own volume and assuming an attractive force between individual molecules. This force is now known as the "Van der Waals force".
In 1877 J.D. became the first professor of physics at the Athenaeum Illustre in Amsterdam, which had recently been promoted to a university (and later became the University of Amsterdam). From 1875 to 1895 he was a member of the Royal Dutch Academy of Arts and Sciences. He retired as professor in 1908. To honor him, a memorial listing his most memorable contributions to physics was unveiled in the physics laboratory. On the 18th of November of the same year he, together with Kamerlingh Onnes, received the gold medal of the "Genootschap ter bevordering van natuur-, genees- en heelkunde" in Amsterdam. This was a great honor, as these medals are awarded only very rarely, and they were also the first physicists to receive this honorary award.
Van der Waals earned many awards during his life. He was, for example, one of the twelve foreign members of the Académie des Sciences in Paris. In 1910 he received the Nobel Prize in Physics for his work on the equation of state for gases and liquids. He became the third Dutch physicist to win this prize, preceded only by Zeeman and Lorentz.
He died, at the age of 83, on the 8th of March 1923. Professor Went, chairman of the Royal Dutch Academy of Arts and Sciences, wrote the following about his death: "In Van der Waals, the Netherlands has lost one of its greatest sons, one of the most noble practitioners of the sciences".
It should be clear, then, why we as a study association for physics use his name, with pride and with the permission of his heirs. Or, as was noted at the founding of this association: "Let us be a close group, bonded by the Van der Waals forces".
Environmental Systems Research
Spectral methods for imputation of missing air quality data
Shai Moshenberg1,
Uri Lerner1 &
Barak Fishbain1
Environmental Systems Research volume 4, Article number: 26 (2015) Cite this article
Air quality is well recognized as a contributing factor in various physical phenomena and as a public health risk factor. Consequently, there is a need for an accurate way to measure the level of exposure to various pollutants. Longitudinal continuous monitoring, however, is often incomplete due to measurement errors, hardware problems or insufficient sampling frequency. In this paper we introduce the discrete sampling theorem for the task of imputing missing data in longitudinal air-quality time series. Within the context of the discrete sampling theorem, two spectral schemes for filling missing values are presented: a Discrete Cosine Transform (DCT) based method and a sparse-coding K-Singular Value Decomposition (K-SVD) based method.
The evaluation of the suggested methods in terms of accuracy and robustness showed that the spectral methods are comparable to the state of the art when the data is missing at random and do have the upper hand when data is missing in big chunks. The accuracy was evaluated using a complete very long air pollutants time series. Previous studies used incomplete shorter series, altering the results. The robustness of the imputation method was evaluated by examining its performance with increasing portions of missing data.
Spectral methods are a great option for air quality data imputation, which should be considered especially when the missing data patterns are unknown.
Air quality has a profound effect on our physical and economic health (Künzli et al. 2000; Kampa and Castanas 2008; Laumbach and Kipen 2012). Air pollution originates either from natural phenomena or from anthropogenic activity (Cullis and Hirschler 1980; Robinson and Robbins 1970). Regardless of its sources, air pollution undergoes a set of chemical processes in the atmosphere, depending on initial concentrations and ambient conditions. The large number of sources and the complexity of the chemical processes lead to complex scenarios with highly variable spatial and temporal pollution patterns. Thus, the analysis of air pollution and its effects is a challenging task (Nazaroff and Alvarez-Cohen 2001; Levy et al. 2014; Moltchanov et al. 2015; Lerner et al. 2015).
One of the primary tools to assess air-pollution patterns is through continuous monitoring of pollutants ambient levels. To accomplish this, numerous physicochemical methods have been developed and Air quality monitoring (AQM) station networks have been deployed all around the world. However, any longitudinal data acquisitioning system suffers from economical constraints, measurement errors, routine downtime due to maintenance and technical malfunctions, which result in missing data points. Data can be missing in long chunks due to a critical failure or in short intervals due to, for example, calibration or a temporary power outage. To cope with this inherent problem, many imputation methods have been proposed. The length of the missing interval and the kind of study conducted, are important in determining the best method for interpolation.
Regardless of the method used for assigning the missing values, one can compute one value per missing sample, i.e., single imputation, or a few, which are drawn from a prior distribution—multiple imputation (Little and Rubin 2002). The latter has shown promising results in surveys (Rubin 2004; Su et al. 2011), where the distribution to draw the imputed values from is known or can be assessed from the available samples. Air pollution time series are time variant with rapid, sometimes large changes, which would limit the use of multiple imputation methods. Therefore the focus here is on single imputation methods.
Physical imputation models estimate missing values by utilizing environmental conditions and air quality measurements acquired in other sites before, at and after the fact, and measurements acquired at the same location before and after the fact (Hopke 1991). This approach works if the physical laws governing the different phenomena are well known and relatively simple. However, generally, the nature of the entire physical and chemical processes which govern the observed phenomenon, are either unknown or too complex to be described by an analytical model, thereby rendering the physical model approach unsuitable.
Data driven models, typically, do not assume any physical regime governing the observed phenomenon. These methods fill data gaps by using patterns and relations that are observed in the available data (Solomatine et al. 2008). Data driven methods are either based on a single variable or on multi-variable imputation. Single-variable methods estimate missing values through available measurements of the same environmental variable (e.g., NO2, CO or O3). Prominent single variable methods are replacing missing values with the available samples' mean, nearest neighbor (NN), linear interpolation and spline (Junninen et al. 2004). Multi-variable imputation techniques calculate missing samples using data of more than one variable, exploiting relationships between different variables that manifest themselves in the data [e.g., NO2 versus O3 presence (Lee et al. 2002; Haagen-Smit et al. 1953)]. All the aforementioned methods, however, are local methods either in space or in time; meaning missing data is recovered by using data from preceding and succeeding available samples (i.e., locality in time) or adjacent stations (i.e., locality in space). Thus, these methods are mostly effective for cases with a relatively low number of missing data points, they are easy to compute but quickly become less accurate as the amount of missing data increases.
Spectral representation of a signal refers to its analysis with respect to frequency, rather than time (Hamilton 1994). The frequency representation of a signal corresponds to how much of the signal lies within each given frequency band over a range of frequencies. A signal can be converted between the time and frequency domains through transformations that project the signal onto a set of basis-functions which differ in their rates of change, i.e., frequencies. The Fourier transform (Bracewell 1965), for example, projects the time series onto a set of sine waves of different frequencies, each of which represents a frequency component. Similarly, the cosine transform projects the signal onto a set of cosine functions oscillating at different frequencies. As the air-pollution time series acquired by AQM stations are inherently discrete, the discrete forms of these transformations, the Discrete Fourier Transform (DFT) (Bracewell 1965) and the Discrete Cosine Transform (DCT) (Rao et al. 1990), can be used. The justification for analyzing these signals in the spectral domain is twofold. First, spectral methods are global, i.e., they use the complete signal for computation, not just local extremes, similar sub-sequences or areas near the missing data. Second, as atmospheric composition changes over a finite length of time, ambient pollutant levels and meteorological variables (i.e., temperature and wind) can be viewed as data signals with a low rate of change. Hence, the signal can be represented by a small number of coefficients that correspond to the low frequencies (Varotsos et al. 2005; Marr and Harley 2002; Chellali et al. 2010).
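The energy-compaction property can be demonstrated with a few lines of Python/SciPy. The synthetic, slowly varying signal below is only an illustrative stand-in for a pollutant series, and the number of retained coefficients is an arbitrary choice for the example.

import numpy as np
from scipy.fft import dct, idct

t = np.linspace(0.0, 7.0, 336)                                            # one week of half-hourly samples (illustrative)
signal = 10 + 3 * np.sin(2 * np.pi * t) + 1.5 * np.cos(4 * np.pi * t)     # slowly varying synthetic signal

coeffs = dct(signal, norm='ortho')            # forward DCT
band = 20                                     # keep only the 20 lowest-frequency coefficients
truncated = np.zeros_like(coeffs)
truncated[:band] = coeffs[:band]
approx = idct(truncated, norm='ortho')        # reconstruct from the truncated spectrum

energy_kept = np.sum(coeffs[:band] ** 2) / np.sum(coeffs ** 2)
print(energy_kept, np.max(np.abs(signal - approx)))

For such a signal, nearly all of the energy is captured by the low-frequency coefficients and the truncated reconstruction stays close to the original.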
A formal mathematical framework for recovering missing signal samples in the frequency domain, the discrete sampling theorem, was presented by Yaroslavsky et al. (2009). The discrete sampling theorem states the terms and conditions a band-limited frequency representation of a signal with missing samples must fulfill so that the signal can be fully recovered, given it is narrow-banded in some spectral domain. The theorem constitutes the basis of the data imputation scheme presented here. Within its context two spectral signal representations are considered: the DCT (Rao et al. 1990; Yaroslavsky et al. 2009) and the sparse-coding K-cluster Singular Value Decomposition (K-SVD) (Aharon et al. 2006). The application of the suggested methods shows that they are comparable to the state of the art when imputing short missing sequences and do hold the upper hand when larger chunks of subsequent data are missing.
Prior art
Several mathematical models have been suggested for air-pollution data imputation (Junninen et al. 2004; Plaia and Bondi 2006; Schneider 2001). These methods include local methods, such as Nearest Neighbor (NN), mean, linear interpolation, spline and Expectation Maximization (EM). All these methods are thoroughly described in the literature and are recapitulated here for the sake of completeness.
Simple local methods for data imputation such as NN, Mean and Linear Interpolation were shown to be effective, especially when signal's average levels are estimated (Junninen et al. 2004). NN fills missing samples using the value of its nearest known neighbor. Linear Interpolation infers the missing values based on a weighted average of the neighboring known samples based on the temporal distance. Mean interpolation replaces missing values with the average of the set of known samples within a temporal window around the missing sample.
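A minimal sketch of these three local imputers in Python/NumPy is given below. The window length, variable names and the toy series are illustrative choices, not taken from the paper.

import numpy as np

def impute_local(values, method='linear', window=6):
    # values: 1-D array with np.nan marking missing samples
    x = np.asarray(values, dtype=float)
    idx = np.arange(x.size)
    known = ~np.isnan(x)
    out = x.copy()
    for i in idx[~known]:
        if method == 'nearest':                       # nearest known neighbour
            j = idx[known][np.argmin(np.abs(idx[known] - i))]
            out[i] = x[j]
        elif method == 'linear':                      # distance-weighted average of the neighbours
            out[i] = np.interp(i, idx[known], x[known])
        elif method == 'mean':                        # mean of the known samples in a window
            lo, hi = max(0, i - window), min(x.size, i + window + 1)
            out[i] = np.nanmean(x[lo:hi])
    return out

series = np.array([3.1, 2.9, np.nan, np.nan, 4.0, 4.2, np.nan, 3.8])
print(impute_local(series, 'linear'))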
A more computationally intense local imputation method is spline interpolation. Spline describes the signal between the available samples through a set of continuous functions. It can be thought of as threading a rope through the k available known samples (nodes). The signal is broken into k-1 segments, each represented by a third-degree polynomial function:
$$f_{i} ( {x_{i} } ) = a_{i} x_{i}^{3} + b_{i} x_{i}^{2} + c_{i} x_{i} + d_{i}$$
For the k−1 segments, Spline will set k−1 piecewise functions, composing a total of 4(k−1) unknown parameters—\(\left\{ {\left[ {a_{i} ,b_{i} ,c_{i} ,d_{i} } \right]_{{i \in \left[ {1,k - 1} \right]}} } \right\}\). In order to compute these 4(k−1) unknowns, it is imposed that the first and second derivatives are equal at each node:
$$\frac{df_{i}(x_{i})}{dx} = \frac{df_{i-1}(x_{i})}{dx}, \qquad \frac{d^{2} f_{i}(x_{i})}{dx^{2}} = \frac{d^{2} f_{i-1}(x_{i})}{dx^{2}}$$
Together with the requirement that each segment passes through its two end nodes, this yields \(4(k-1)-2\) equations, i.e., two fewer than the number of unknowns. The two missing conditions are supplied by setting the second derivatives at the end points to 0 (the natural spline boundary conditions).
The advantage of this method is that while being relatively simple to calculate, the smooth function achieved with continuity in the first and second derivatives, better describes the changing nature of physical phenomenon over time. This method was shown to work well with short intervals of missing data points (Junninen et al. 2004).
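A spline-based imputation along these lines can be sketched with SciPy's CubicSpline. This is a minimal sketch, not the paper's implementation; bc_type='natural' imposes the zero second derivatives at the end points mentioned above.

import numpy as np
from scipy.interpolate import CubicSpline

def impute_spline(values):
    # Fit a natural cubic spline through the known samples and evaluate it at the gaps.
    x = np.asarray(values, dtype=float)
    idx = np.arange(x.size)
    known = ~np.isnan(x)
    spline = CubicSpline(idx[known], x[known], bc_type='natural')
    out = x.copy()
    out[~known] = spline(idx[~known])
    return out

series = np.array([3.1, 2.9, np.nan, np.nan, 4.0, 4.2, np.nan, 3.8])
print(impute_spline(series))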
The Expectation Maximization (EM) algorithm is often used for filling in missing data using available data from the entire time series (Junninen et al. 2004; Dempster et al. 1977). The main assumption is that the missing data have a linear relation with the available data. To exploit this, the data is split into a set of equal-length vectors, \(\left\{ {d_{(k)} } \right\} \in D\), e.g., daily, weekly or monthly sequences. Then the missing samples are assigned an initial guess of the missing values (i.e., zeros or, for each vector, the average of its available samples). The missing data points in one vector are computed by a linear combination of the vectors with non-missing corresponding data points. The covariance between the vectors is used to determine how dominant a particular vector will be in the proposed linear combination.
Formally, let A be a matrix of G × P data points, where G is the number of time periods evaluated (e.g., a week or a day) and P is the number of records per the above time period (e.g., samples per week or day). Let \(\left\{ a \right\} \subseteq A\) be the set of available data and \(\left\{ m \right\} \subseteq A\) be the set of missing samples. For a given column c, let \(\left\{ {a_{a}^{c} } \right\}\) and \(\left\{ {a_{m}^{c} } \right\}\) be the sets of available and missing data in c respectively and μ c is the mean value of \(\left\{ {a_{a}^{c} } \right\}\). Finally, \(A\backslash c\) is matrix A excluding column c. Using the notation above, missing values of A are estimated through the following linear regression model:
$$\left\{ a_{m}^{c} \right\} = \mu_{c} + \left( \left\{ a \right\}_{a}^{A\backslash c} - \mu_{A\backslash c} \right) B + e$$
e is the residual matrix assumed to have a zero mean and B is the matrix of the regression parameters to be calculated using the covariance matrix, Σ:
$$B = \Sigma_{aa}^{ - 1} \cdot \Sigma_{am}$$
where Σaa denotes the sub-covariance matrix of the columns of the available values with the columns of the available values, and Σam denotes the sub-covariance matrix of the columns of the available values with the columns of the missing values.
Applying Eq. 3 results in filling the missing data. Having the data in hand, a new mean and covariance matrix are calculated. Using the new B and Σ the process is repeated for all originally missing samples. The process is repeated until convergence.
The Regulated EM algorithm (Smith et al. 2003) presents a slight modification of the EM method: the sub-covariance matrix \(\Sigma_{aa}^{ - 1}\) is replaced by the following expression:
$$\Sigma_{aa}^{ - 1} \leftarrow \left( {\Sigma_{aa} + h^{2} D} \right)^{ - 1}$$
where D is the diagonal of Σaa and h is a scalar regulation parameter. This modification ensures that the matrix is positive definite and invertible and speeds up convergence, while artificially making the variance of each vector more dominant with respect to its covariance with the rest of the columns.
The EM method may lead to better results, especially if the missing segments are part of a recurring pattern. But if the pattern does not recur in a set rhythm, this method may not work. Further, this is an iterative method, which leads to a greater computational cost compared to the local methods described above.
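A stripped-down sketch of the (regulated) EM imputation described above is given below in Python/NumPy. It fills each row's missing entries from the conditional expectation implied by the current mean and covariance estimates; the initial guess, the convergence tolerance and the value of the regulation parameter h are our illustrative choices.

import numpy as np

def impute_em(A, h=0.1, tol=1e-6, max_iter=100):
    # A: G x P matrix (e.g., days x samples-per-day) with np.nan for missing entries.
    X = np.array(A, dtype=float)
    missing = np.isnan(X)
    col_means = np.nanmean(X, axis=0)
    X[missing] = np.take(col_means, np.where(missing)[1])      # initial guess: column means
    for _ in range(max_iter):
        X_prev = X.copy()
        mu = X.mean(axis=0)
        sigma = np.cov(X, rowvar=False)
        for g in range(X.shape[0]):
            m = missing[g]
            if not m.any():
                continue
            o = ~m
            s_oo = sigma[np.ix_(o, o)]
            s_oo = s_oo + h ** 2 * np.diag(np.diag(s_oo))       # regulation term h^2 * D
            s_mo = sigma[np.ix_(m, o)]
            B = np.linalg.solve(s_oo, s_mo.T)                   # regression coefficients
            X[g, m] = mu[m] + (X[g, o] - mu[o]) @ B             # regression-based estimate
        if np.max(np.abs(X - X_prev)) < tol:                    # repeat until convergence
            break
    return X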
All the aforementioned methods for air quality missing data imputation have been well documented. However, these methods are not sufficiently accurate when longer segments of data are missing, or in the event that the relationship between the data segments is not linear. Spectral methods consider the entire signal in their evaluation of missing data, which in turn yields better results when large chunks of data are missing.
A quasi-spectral method for air-quality data imputation, which uses information from the air monitoring station array, is the Site-Dependent Effect Method (SDEM) (Plaia and Bondi 2006). The SDEM assumes that there are similarities in air quality sequences throughout the week, as well as between a given day of the week, e.g. Sunday or Monday, and a given hour of day. The missing data is then imputed by taking the mean value of all the non-missing measurements from the other stations at the missing time point, and modifying it based on the week, day and hour effect of the given station. This method is similar to spectral methods in the way it utilizes intuitive cycles, i.e. hours, days and weeks, but it misses less obvious cycles that may contain a lot of information, such as phenomena specific to limited regions. In addition, all the weights in this method are arbitrarily set, and it stands to reason that some cycles have a more profound effect than others, so different weight values may produce more accurate results.
For evaluating the imputation methods and their suitability for different scenarios and loss patterns, one must use a complete dataset and impose different data loss patterns and portions. The data was acquired from a standard AQM station, maintained by the Haifa District Municipal Association for Environmental Protection (HDMAE). The station is situated on the roof of the HDMAE headquarters building, located at the center of the Haifa Bay industrial-commercial area. The station is ~12 m above ground level and reports every 30 min the average temperature, wind speed and direction, PM2.5 and PM10 levels, O3, NOx, NO2, SO2, and CO. In this study two long complete time series of SO2 and NO2 were used. SO2 data was acquired using a pulsed fluorescence analyzer, over 167 days from December 31st, 2006 until July 18th, 2006, a total of 8016 half-hour average samples. NO2 levels were recorded using a chemiluminescence analyzer from January 27th, 2008 to June 4th, 2008, a total of 138 days with 6240 samples. For the EM computation the data was divided into 24 h sequences, each consisting of 48 measurements, constructing 167 SO2 sequences and 138 NO2 sequences.
Previous data imputation studies (Junninen et al. 2004; Plaia and Bondi 2006; Schneider 2001) used shorter time series with gaps of missing samples in the original data. To cope with that, the gaps in the original time series were imputed in a preprocessing phase. After filling these gaps, a deliberate omission of data was executed and the omitted data was recovered. The error between the values of the omitted and recovered samples was reported. Thus, the data imputed in the preprocessing phase was regarded as ground truth. All imputation methods that do not rely on physical models must base their missing-data estimates on the signal's characteristics and behavior. Working with time series that contain imputed data alters these characteristics and thus hampers the results. In this study, working with complete longitudinal datasets has mitigated these biases.
Epidemiologic and exposure studies on health effects of air pollution look at long-term chronic and short-term acute health implications (Lebowitz 1996). Chronic studies look at the long-term effects of pollutants. These studies mainly focus on cumulative exposure and less on a sudden increase in the concentration of any one pollutant. Acute effects, on the other hand, are transient and are a result of time-variant exposure (Peng and Dominici 2008). The evaluation criteria for data imputation must account for the different nature of these two classes of studies. When chronic exposure is sought, the reconstruction mean error, typically the mean squared error (MSE), should be the performance measure. For acute exposure assessment studies, one should evaluate the maximum error in the signal's values and behavior. To assess how well the reconstructed signal represents the original data's behavior, the difference in the second statistical moments of the original and reconstructed signals is computed. For practicality's sake, the runtimes are also reported.
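The three performance indicators can be computed directly; a minimal sketch in Python/NumPy follows (the function and key names are ours, not the paper's).

import numpy as np

def imputation_scores(original, reconstructed):
    # Compare a reconstructed series against the complete ground-truth series.
    orig = np.asarray(original, dtype=float)
    rec = np.asarray(reconstructed, dtype=float)
    return {
        'mse': np.mean((orig - rec) ** 2),          # chronic-exposure criterion
        'max_diff': np.max(np.abs(orig - rec)),     # acute-exposure criterion
        'std_diff': abs(orig.std() - rec.std()),    # signal-behavior criterion
    }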
In order to evaluate imputation performance, two data omission mechanisms were used. The first mechanism is omission of data at completely random locations, i.e., data Missing Completely At Random (MCAR) (Little and Rubin 2002). This type of behavior was found to characterize air quality data from standard AQM stations (Junninen et al. 2004; Rubin 1976). The second mechanism removes a single chunk of data starting at a completely random time point.
For both data omission mechanisms a series of tests were carried out, where at each test, an increasing portion of data was omitted. The data portion that was omitted ranged from one sample to 99 % of the entire dataset. At each test the number of omitted samples was increased by 1. This set of tests extends previous studies, which evaluated the methods for small sets (i.e., small number of tests) all limited to small portions of missing samples [3, 10 and 25 %—(Schneider 2001; Plaia and Bondi 2006; Junninen et al. 2004) respectively]. The location of the omitted samples is chosen at random at each run. In order to mitigate a possible bias due to a specific location of the omitted data, for each number of omitted samples, five random tests were carried out, so at each test different random samples were chosen. The average performance indicator's value over the five runs is reported.
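The two omission mechanisms can be sketched as follows (Python/NumPy; the function and parameter names and the random-generator handling are ours).

import numpy as np

def omit_random(series, n_missing, rng=None):
    # MCAR: remove n_missing samples at completely random positions.
    rng = rng or np.random.default_rng()
    x = np.array(series, dtype=float)
    drop = rng.choice(x.size, size=n_missing, replace=False)
    x[drop] = np.nan
    return x

def omit_batch(series, n_missing, rng=None):
    # Remove one continuous chunk of n_missing samples starting at a random position.
    rng = rng or np.random.default_rng()
    x = np.array(series, dtype=float)
    start = rng.integers(0, x.size - n_missing + 1)
    x[start:start + n_missing] = np.nan
    return x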
In all cases and scenarios tested here, the spline method is never the method of choice; for all examined cases, the spline method was shown to be inferior. For some performance criteria, such as the MSE, maximum error and standard deviation differences for batch omission, the error for spline is between five and seven orders of magnitude larger than for the rest of the methods, making it hard to put on the same graph. Therefore, the spline results are omitted from Fig. 1 through Fig. 4.
Fig. 1 Five-run-averaged MSE as a function of the number of omitted data samples, reconstructing the signal with the following imputation methods: NN (blue), linear interpolation (green), substituting missing data with the mean (cyan), EM (purple), DCT (yellow) and K-SVD (black). The color codes are detailed in a panel. The missing data was omitted either at random (a) or in batches (b). a NO2 (random locations); b SO2 (continuous batch)
Figure 1 depicts the reconstruction MSE for increasing portions of missing data for the NO2 (Fig. 1a) and SO2 (Fig. 1b) time series. The error for each portion of missing data is computed over five runs, for the two omission mechanisms described above: randomly scattered (Fig. 1a) and batch (Fig. 1b). When data is omitted at random, the local methods perform best, namely the spline in the case of a very low corruption rate and linear interpolation as the rate of corruption increases. When the data is missing in segments, the EM, DCT and K-SVD algorithms perform best. In studies centered on chronic effects of air pollution, if the data is missing in short intervals, linear interpolation is the best method for filling in the missing data. But in the event of a long missing sequence, the K-SVD should be the method of choice.
Figure 2 presents the maximum difference between the original signal and the reconstructed one. It can be seen that for randomly omitted data (Fig. 2a) the K-SVD and the DCT methods have the smallest deviation, with a slight advantage for the K-SVD over the DCT. Consequently, studies conducted in order to investigate the acute effect of air pollution should fill in missing data with the K-SVD method. When the data is missing in segments (Fig. 2b), for all methods the error is of the same magnitude as the signal. Therefore, such studies should not use time series with long temporal windows missing.
Fig. 2 Five-run-averaged maximum difference as a function of the number of omitted data samples, reconstructing the signal with the following imputation methods: NN (blue), linear interpolation (green), substituting missing data with the mean (cyan), EM (purple), DCT (yellow) and K-SVD (black). The color codes are detailed in a panel. The missing data was omitted either at random (a) or in batches (b). a NO2 (random locations); b SO2 (continuous batch)
Figure 3 presents the difference in the standard deviation between the reconstructed and the original signal. For both randomly and batch omitted data the DCT, K-SVD and EM are on par, outperforming the local methods. The standard deviation difference between the original signal and the reconstructed one is smallest using the DCT method if the data is missing at random. The EM method is best if a long sequence is missing. Overall, the DCT method reconstructs the missing data in a way that is most similar to the original signal in terms of STD compared to all the other methods.
The computation times of the various methods are presented in Fig. 4. As expected, the spectral methods, i.e., K-SVD for signal recovery from the sparse dimension, DCT and EM, are much more costly in terms of computation time. For both omission mechanisms, the computation time decreases as the portion of missing data increases. Even though the spectral methods are more costly in terms of computation, one should note that the data being processed describes months' worth of data, while the computation times are on the order of minutes. Therefore, the longer computation times should not prevent one from using these spectral methods for data imputation.
In this paper two spectral methods for data imputation, originating from the discrete sampling theorem, are introduced for air quality time series with missing data. The methods are thoroughly evaluated with respect to the common practice and the state-of-the-art missing data recovery methods. The evaluation here is much more comprehensive than in previous studies, as it uses much longer air quality time series with no missing data, under a significantly larger number of missing-data scenarios.
The evaluation results are summarized in Fig. 5 and Table 1 (for randomly omitted data) and Fig. 6 and Table 2 (for chunk removal). Figures 5a and 6a depict the average MSE for the NO2 (randomly omitted data) and SO2 (chunk removal) time series. The average is computed over three sets of test runs. The first set, low signal degradation due to missing data, comprises all the tests from 1 missing sample up to 33 % of missing samples. This set is dubbed low and is marked in blue (Low-blue). The second set is all runs with 33 % up to 66 % of samples omitted (Mid-green). The last set of runs covers 66-99 % of samples missing, i.e., high degradation (High-yellow). For data missing at random, Fig. 5a and Table 1 (MSE), the simple imputation methods provide the best MSE results. However, the spectral methods do not fall far behind. For data missing in chunks, Fig. 6a and Table 2 (MSE), the spectral methods, DCT and K-SVD, present the best performance MSE-wise.
Fig. 5 Results summary for NO2 data missing at random, for low signal degradation (i.e., a small portion of missing data), mid-range signal degradation and high signal degradation. The color codes are detailed in panels within each image. a MSE, b max difference, c STD difference, and d execution time
Table 1 Results summary—NO2 data missing at random
Fig. 6 Results summary for SO2 data missing in batches, for low signal degradation (i.e., a small portion of missing data), mid-range signal degradation and high signal degradation. The color codes are detailed in panels within each image. a MSE, b max difference, c STD difference, and d execution time
Table 2 Results summary—SO2 data missing in batches
Figure 5b and Table 1 (Diff) and Fig. 6b with Table 2 (Diff) present the max difference. For randomly missing data, the spectral methods, i.e., K-SVD and DCT, have the upper hand. For missing chunks, the error produced by all methods is so large that none of them are recommended for this case. Figure 5c, Table 1 (STD), Fig. 6c and Table 2 (STD) detail the difference in the second moment of the original and reconstructed signals. For both random and chunk missing-data patterns, the K-SVD and DCT present the best results.
While the spectral methods do present higher computational times (Fig. 5d, Table 1 (Time), Fig. 6d and Table 2 (Time)), these times are still feasible. Moreover, the spectral methods have the upper hand, MSE-wise, when the data is missing in chunks, and when evaluating acute exposure, i.e., max difference and signal behavior through its standard deviation. In the cases where the simple methods prevail, the spectral methods do not fall far behind. Therefore, we conclude that the spectral methods in general, and K-SVD and DCT in particular, do present a viable tool for data imputation and should be used as the tool of choice, as they present the best overall performance.
Both the K-SVD (Aharon et al. 2006) and DCT (Yaroslavsky et al. 2009) methods assume a band-limited signal, i.e., only a small portion of the signal's spectral-representation coefficients are non-zero. In most implementations the portion of non-zero coefficients is predetermined. Choosing larger portions of non-zero coefficients would result in longer execution times and may jeopardize the convergence of both the DCT and K-SVD imputation methods. Smaller portions of non-zero coefficients decrease computation times and mitigate the risk of not converging, but may increase the output error. Therefore, when using these methods, one should carefully assess the correct portion of non-zero coefficients in the signal's spectrum.
The discrete sampling theorem
Next we outline the discrete sampling theorem, which constitutes the basis of the data imputation scheme presented here.
Let a(t) be a continuous signal and \(\left\{ {A^{N} } \right\}\) the set of N measurements of the signal. These N samples constitute a uniform sampling grid and are acquired in such a way that, if all N samples are known, then following the Nyquist–Shannon sampling theorem (Unser 2000) they are sufficient for representing the continuous signal. Let \(\left\{ {A^{K} } \right\}\) be the set of K (out of N) available data points taken at irregular positions of the signal's regular sampling grid. Due to data loss \(K < N\). Note that the missing samples are \(\left\{ {A^{N} } \right\}\) excluding \(\left\{ {A^{K} } \right\}\), \(\left\{ {A^{M} } \right\} = \left\{ {A^{N} } \right\}\backslash \left\{ {A^{K} } \right\}\). The goal, then, is to generate out of this incomplete set of \(K\) samples a complete set of \(N\) signal samples that secures the most accurate approximation in a certain metric, typically \(L_{2}\). The discrete sampling theorem (Yaroslavsky et al. 2009) states the terms and conditions a signal must fulfill in the transform domain so that its \(\left\{ {A^{N} } \right\}\) samples can be recovered from the available \(\left\{ {A^{K} } \right\}\) samples:
Theorem 1
The Discrete Sampling Theorem (Yaroslavsky et al. 2009): Any discrete signal of \(N\) samples defined by its \(K \le N\) sparse and not necessarily regularly arranged samples, which is known to have only \(K \le N\) non-zero transform coefficients for a certain transform \(\varPhi_{N}\) (i.e., a \(\varPhi_{N}\)-transform "band-limited" signal), can be fully recovered from exactly \(K\) of its samples, provided the positions of the samples secure the existence of an inverse transform matrix \(\left\{ {\varPhi_{K\,of\,N} } \right\}^{ - 1}\), where \(\varPhi_{K\,of\,N}\) consists of the K rows of the transform matrix \(\varPhi_{N}\) that correspond to the K sample positions. If the signal has more than K non-zero transform coefficients, the recovery process guarantees the minimum reconstruction error.
Theorem 1 implies that selecting a transform that features the best energy compaction with the smallest number of transform coefficients secures the best approximation of \(\left\{ {A^{N} } \right\}\) for a given subset \(\left\{ {A^{K} } \right\}\) of its samples. The recovery process is based on the following simple iterative procedure (Yaroslavsky et al. 2009 ):
Algorithm 1. Discrete sampling theorem imputation algorithm.
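The algorithm figure itself is not reproduced here, but the iteration it describes can be sketched in a few lines of Python. This is a minimal illustration rather than the authors' Matlab code: it assumes the usual zero-fill / band-limit / restore-known-samples loop, builds the transform matrix directly from the DCT definition used later in this section, and the names dct_basis, impute_band_limited, band and n_iter are purely illustrative.

```python
import numpy as np

def dct_basis(n):
    # Transform matrix Phi_N following the DCT definition used in this paper:
    # alpha(k) = sum_j A(j) * cos(pi/n * (j + 0.5) * k)
    k = np.arange(n)[:, None]
    j = np.arange(n)[None, :]
    return np.cos(np.pi / n * (j + 0.5) * k)

def impute_band_limited(samples, known_mask, band, n_iter=200):
    """Recover all N samples from the ones marked True in known_mask,
    assuming only the first `band` transform coefficients are non-zero."""
    n = len(samples)
    phi = dct_basis(n)
    phi_inv = np.linalg.pinv(phi)                    # inverse transform matrix
    estimate = np.where(known_mask, samples, 0.0)    # zero-fill the missing positions
    for _ in range(n_iter):
        coeffs = phi @ estimate                      # forward transform
        coeffs[band:] = 0.0                          # enforce the band limitation
        estimate = phi_inv @ coeffs                  # back to the signal domain
        estimate[known_mask] = samples[known_mask]   # re-impose the known samples
    return estimate

# Toy usage: a signal that is band-limited by construction, with ~30% of samples missing.
n = 64
phi = dct_basis(n)
true_signal = 1.0 * phi[2] + 0.5 * phi[5]            # combination of low-order basis rows
mask = np.random.rand(n) > 0.3
recovered = impute_band_limited(np.where(mask, true_signal, 0.0), mask, band=16)
```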
The mean square error of this algorithm is calculated by:
$$MSE = \left\| {A_{i}^{N} - \widehat{A}_{i}^{N} } \right\|_{2}^{2} = \sum\limits_{j \notin R} \left| {\widehat{\alpha }_{i}^{N} (j)} \right|^{2}$$
The transform of choice here is the discrete cosine transform (DCT), given by:
$$\alpha^{N} (k) = \sum\limits_{j = 0}^{N - 1} A^{N} (j)\cos \left[ {\frac{\pi }{N}\left( {j + 0.5} \right)k} \right], \quad k = 0, \ldots, N - 1$$
DCT is a widely used function in the field of image processing and data compression because of its tendency to concentrate most of the energy from a signal in a narrow band (Wang et al. 2000).
Sparse coding data imputation
Sparse coding is an emerging spectral approach to data analysis (Elad 2010). While classical spectral methods utilize a predetermined set of basis functions (e.g., DFT, DCT) for representing the signal, sparse coding methods compute the set of basis functions which results in the sparsest representation of the signal, i.e., most coefficients of the signal's representation are zeros. The set of basis functions is referred to as a dictionary, where each element in the dictionary is an atom. The process of computing the basis functions is called dictionary learning (Kreutz-Delgado et al. 2003). K-SVD (Aharon et al. 2006) is a dictionary learning algorithm for creating a set of basis functions for sparse representations. K-SVD is a generalization of the k-means clustering method, and it works by iteratively alternating between sparse coding of the input data (based on the current dictionary) and updating the atoms in the dictionary to better fit the data.
The discrete sampling theorem suggests that the smaller the number of non-zero coefficients in the transform domain, the better the reconstruction. Yet the transform of choice, DCT, while known for good energy compaction in general, is neither customized for nor guaranteed to suit the specific data at hand, and may or may not yield a sparse representation (Elad 2010). To cope with this problem, building a custom transform, or dictionary, for sparse coding has been suggested (Elad 2010; Aharon et al. 2006). The dictionary is an overcomplete matrix, \(D \in {\mathbb{R}}^{N \times P}\), that consists of P atoms (each of length N) and is designed so that a signal \(A^{N}\) can be represented by a sparse linear combination of these atoms. For finding D, the K-cluster Singular Value Decomposition (K-SVD) method (Aharon et al. 2006) is employed (for details see Section S1 in Additional file 1), where the matrix \(A^{N}\) is utilized as the training set for the process.
Having the dictionary in hand, a sparse representation for \(A^{N}\) is sought. Given D, the dictionary, or basis functions, the sparse representation, \(x^{N}\), aims at minimizing the error between the original signal \(A^{N}\) and the estimate of the signal using the basis function set, D:
$$\mathop {\min }\limits_{x} \left\{ {\left\| {A^{N} - Dx^{N} } \right\|_{F}^{2} } \right\} \quad \mathrm{s.t.}\;\left\| {x^{N} } \right\|_{0} < K$$
where \(\left\| \cdot \right\|_{0}\) is the \(\ell^{0}\)-norm, counting the non-zero elements in a vector, and \(\left\| B \right\|_{F}\) is the Frobenius norm: \(\left\| B \right\|_{F} = \sqrt {\sum\nolimits_{ij} {\left( {B_{ij} } \right)^{2} } }\).
Equation 8 was shown to be NP-hard, i.e., no efficient methodology is known for finding \(x^{N}\). An approximation of the sparse representation can be obtained through the matching pursuit (MP) algorithm (Mallat and Zhang 1993), or through the K-SVD algorithm, which produces \(x^{N}\) as a byproduct. Note that \(x^{N}\) is guaranteed to be both sparse (i.e., band limited) and zero in all coefficients outside the band-limited area, R. Thus, unlike the DCT solution, there is no need to assume beforehand which coefficients are outside R and no need to zero them. Having both the K-SVD and MP algorithms in hand, Algorithm 1 becomes:
Algorithm 2. K-SVD imputation algorithm
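As above, the algorithm figure is not reproduced, but the imputation step can be sketched as follows. The sketch assumes the dictionary D (an N x P matrix) has already been learned, for example by a K-SVD run on complete training signals, which is omitted here, and it uses a plain matching pursuit for the sparse code; matching_pursuit, sparse_impute and n_nonzero are illustrative names, not the authors' implementation.

```python
import numpy as np

def matching_pursuit(y, D, n_nonzero):
    """Greedily approximate y as a sparse combination of the columns of D."""
    residual = y.astype(float).copy()
    x = np.zeros(D.shape[1])
    norms_sq = np.sum(D ** 2, axis=0) + 1e-12        # atom energies (guard against zero atoms)
    for _ in range(n_nonzero):
        correlations = D.T @ residual
        k = int(np.argmax(np.abs(correlations) / np.sqrt(norms_sq)))  # best-matching atom
        coef = correlations[k] / norms_sq[k]                          # project residual onto it
        x[k] += coef
        residual = residual - coef * D[:, k]
    return x

def sparse_impute(samples, known_mask, D, n_nonzero=5):
    """Fill the missing entries of `samples` using a learned dictionary D (N x P)."""
    D_known = D[known_mask, :]                       # atom rows at the observed positions only
    x = matching_pursuit(samples[known_mask], D_known, n_nonzero)
    estimate = D @ x                                 # reconstruct on the full sampling grid
    estimate[known_mask] = samples[known_mask]       # keep the measured values untouched
    return estimate
```

Because the sparse code is fitted only on the observed rows of the dictionary, the reconstruction D @ x also fills in the rows that were never observed, which is exactly the imputation step.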
The data omission, imputation and evaluation codes were written as Matlab scripts (Matlab R2012b) and are available for academic purposes from http://fishbain.net.technion.ac.il.
http://www.envihaifa.org.il/eng.
Aharon M, Elad M, Bruckstein A (2006) K-SVD: an algorithm for designing overcomplete dictionaries for sparse representation. Signal Process IEEE Trans 54(11):4311–4322
Bracewell RN (1965) The Fourier transform and its applications. Mcgraw-Hill, New-York
Chellali F, Khellaf A, Belouchrani A (2010) Wavelet spectral analysis of the temperature and wind speed data at Adrar, Algeria. Renewable Energy 35(6):1214–1219
Cullis C, Hirschler M (1980) Atmospheric sulphur: natural and man-made sources. Atmos Environ 14(11):1263–1278
Dempster A, Laird N, Rubin D (1977) Maximum likelihood from incomplete data via the EM algorithm. J Roy Stat Soc 39(1):1–38
Elad M (2010) Sparse and redundant representations: from theory to applications in signal and image processing. Springer Science, Haifa
Hamilton JD (1994) Time series analysis. Princeton University Press, Princeton
Haagen-Smit A, Bradley C, Fox M (1953) Ozone formation in photochemical oxidation of organic substances. Ind Eng Chem 45(9):2086–2089
Hopke P (1991) Receptor modeling for air quality management, vol 7. Elsevier, Amsterdam, The Netherlands
Junninen H et al (2004) Methods for imputation of missing values in air quality data sets. Atmos Environ 38(18):2895–2907
Kampa M, Castanas E (2008) Human health effects of air pollution. Environ Pollut 151(2):362–367
Kreutz-Delgado K et al (2003) Dictionary learning algorithms for sparse representation. Neural Comput 15(2):349–396
Künzli N et al (2000) Public-health impact of outdoor and traffic-related air pollution: a European assessment. Lancet 356(9232):795–801
Laumbach RJ, Kipen HM (2012) Respiratory health effects of air pollution: update on biomass smoke and traffic pollution. J Allergy Clin Immunol 129(1):3–12
Lebowitz M (1996) Epidemiological studies of the respiratory effects of air pollution. Euro Respir J 9(5):1029–1054
Little R, Rubin D (2002) Bayes and multiple imputation. In: Statistical analysis with missing data, 2nd edn. Wiley, Hoboken, New Jersey, pp 200–222
Lee K, Xue J, Geyh A, Ozkaynak H, Leaderer B, Weschler C, Spengler J (2002) Nitrous acid, nitrogen dioxide, and ozone concentrations in residential environments. Environ Health Perspect 110(2):145
Lerner U, Yacobi T, Levy I, Moltchanov S, Cole-Hunter T, Fishbain B (2015) The effect of egomotion on environmental monitoring. Sci Total Environ 533:8-16
Levy I, Mihele C, Lu G, Narayan J, Brook JR (2014) Evaluating multipollutant exposure and urban air quality: pollutant interrelationships, neighborhood variability, and nitrogen dioxide as a proxy pollutant. Environ Health Perspect 122(1):65-72
Marr L, Harley R (2002) Spectral analysis of weekday–weekend differences in ambient ozone, nitrogen oxide, and non-methane hydrocarbon time series in California. Atmos Environ 36(14):2327–2335
Mallat S, Zhang Z (1993) Matching pursuits with time-frequency dictionaries. Signal Process IEEE Transa 41(12):3397–3415
Nazaroff W, Alvarez-Cohen L (2001) Environmental Engineering Science. John Wiley, New-York
Peng RD, Dominici F (2008) Statistical methods for environmental epidemiology with r: a case study in air pollution and health. Springer, Berlin
Plaia A, Bondi A (2006) Single imputation method of missing values in environmental pollution data sets. Atmos Environ 40(38):7316–7330
Rao KR, Yip P, Ramamohan Rao K (1990) Discrete cosine transform: algorithms, advantages, applications. Academic Press, Boston
Robinson E, Robbins RC (1970) Gaseous nitrogen compound pollutants from urban and natural sources. J Air Pollut Control Assoc 20(5):303–306
Rubin D (1976) Inference and missing data. Biometrika 65:581–592
Rubin D (2004) Multiple imputation for nonresponse in surveys, vol 81. Wiley, Hoboken
Schneider T (2001) Analysis of incomplete climate data : estimation of Mean Values and Covariance Matrices and Imputation of Missing Values. Am Meteorol Soc 14:853–871
Smith R, Kolenikov S, Cox L (2003) Spatiotemporal modeling of PM2.5 data with missing values. J Geophys Res 108(D24):11-1–11-10
Solomatine D, See LM, Abrahart RJ (2008) Data-driven modelling: concepts, approaches and experiences. In: Practical hydroinformatics. Springer, Berlin, Heidelberg, pp 17–30
Su Y-S, Gelman A, Hill J, Yajima M (2011) Multiple Imputation with Diagnostics (mi) in R: opening windows into the black box. J Stat Softw 45(2):1–31
Unser M (2000) Sampling 50 years after Shannon. Proc IEEE 88(4):569–587
Varotsos C, Ondov J, Efstathiou M (2005) Scaling properties of air pollution in Athens, Greece and Baltimore, Maryland. Atmos Environ 39(22):4041-4047
Wang Y, Vilermo M, Yaroslavsky L (2000) Energy compaction property of the MDCT in comparison with other transforms. Los-Angeles, CA
Moltchanov S, Levy I, Etzion Y, Lerner U, Broday DM, Fishbain B (2015) On the feasibility of measuring urban air pollution by wireless distributed sensor networks. Sci Total Environ 502:537–547
Yaroslavsky LP, Shabat G, Salomon BG, Ideses IA, Fishbain B (2009) Nonuniform sampling, image recovery from sparse data and the discrete sampling theorem. JOSA A 26(3):566-575
SM developed and implemented the methods, conducted the numerical experimentation, and led the drafting of the manuscript. UL designed the study, analyzed the results and took part in drafting the manuscript. BF advised and directed the presented research as well as contributed to drafting of the manuscript. All authors read and approved the final manuscript.
This work was partially supported by the 7th European Framework Program (FP7) ENV.2012.6.5-1, Grant Agreement No. 308524 (CITI-SENSE), the Technion Center of Excellence in Exposure Science and Environmental Health (TCEEH), the New-York Metropolitan Research Fund, and the Environmental Health Foundation (EHF). The contribution to this paper of Michael Elad's course on sparse and redundant representations and Elad's advises are acknowledged.
The Technion Center of Excellence in Exposure Science and Environmental Health (TCEEH), Faculty of Civil and Environmental Engineering, Technion-Israel Institute of Technology, Haifa, 3200003, Israel
Shai Moshenberg, Uri Lerner & Barak Fishbain
Shai Moshenberg
Uri Lerner
Barak Fishbain
Correspondence to Barak Fishbain.
K-SVD Algorithm.
Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.
Moshenberg, S., Lerner, U. & Fishbain, B. Spectral methods for imputation of missing air quality data. Environ Syst Res 4, 26 (2015). https://doi.org/10.1186/s40068-015-0052-z
Univariate
Imputing
Spectral methods
Discrete sampling theorem
Sparse coding
K-SVD
Time Value of Money: Determining Your Future Worth
Daniel Myers
Daniel Myers oversees client portfolios, investment policy at Kreger Financial and is the owner of Deep Fork LLC, a real estate investment firm.
If you were offered $100 today or $100 a year from now, which would you choose? Would you rather have $100,000 today or $1,000 a month for the rest of your life?
Net present value (NPV) provides a simple way to answer these types of financial questions. This calculation compares the money received in the future to an amount of money received today while accounting for time and interest. It's based on the principle of time value of money (TVM), which explains how time affects the monetary worth of things.
The TVM calculation may sound complicated, but with some understanding of NPV and how the calculation works—along with its basic variations, present value, and future value—we can start putting this formula to use in common application.
A Rationale for the Time Value of Money
If you were offered $100 today or $100 a year from now, which would be the better option and why?
This question is the classic method in which the TVM concept is taught in virtually every business school in America. The majority of people asked this question choose to take the money today. And they'd be right, according to TVM, which holds that money available at the present time is worth more than the identical sum in the future. But why? What are the advantages and, more importantly, the disadvantages of this decision?
There are three basic reasons to support the TVM theory. First, a dollar can be invested and earn interest over time, giving it potential earning power. Second, money is subject to inflation, which eats away at the spending power of the currency over time, making it worth less in the future.
Finally, there is always the risk of not actually receiving the dollar in the future, whereas, if you hold the dollar now, there is no risk of this happening (as the old bird-in-the-hand-is-better-than-two-in-the-bush saying goes). Getting an accurate estimate of this last risk isn't easy and, therefore, it's harder to use in a precise manner.
Would you rather have $100,000 today or $1,000 a month for the rest of your life?
Most people have some vague idea of which they'd take, but a net present value calculation can tell you precisely which is better, from a financial standpoint, assuming you know how long you will live and what rate of interest you'd earn if you took the $100,000.
Specific variations of the time value of money calculations are as follows:
Net present value lets you value a stream of future payments into one lump sum today, as you see in many lottery payouts.
Present value tells you the current worth of a future sum of money.
Future value gives you the future value of cash that you have now.
Say someone asks you, which would you prefer: $100,000 today or $120,000 a year from now? The $100,000 is the "present value" and the $120,000 is the "future value" of your money. In this case, if the interest rate used in the calculation is 20%, there is no difference between the two.
Determining the Time Value of Your Money
There are five factors in a TVM calculation. They are:
1. Number of time periods involved (months, years)
2. Annual interest rate (or discount rate, depending on the calculation)
3. Present value (what you currently have in your pocket)
4. Payments (If any exist; if not, payments equal zero.)
5. Future value (The dollar amount you will receive in the future. A standard mortgage will have a zero future value because it is paid off at the end of the term.)
Many people use a financial calculator to quickly solve TVM questions. By knowing how to use one, you could easily calculate a present sum of money into a future one, or vice versa. With four of the above five components in-hand, the financial calculator can easily determine the missing factor.
But you can also calculate future value (FV) and present value (PV) by hand. For future value, the formula is:
$$\text{FV}=\text{PV}\times\left(1+i\right)^n$$
For present value, the formula would be:
$$\begin{aligned} &\text{PV}=\text{FV}/\left(1+i\right)^n\\ &\textbf{where:}\\ &\text{FV}=\text{Future value of money}\\ &\text{PV}=\text{Present value of money}\\ &i=\text{Interest rate per period}\\ &n=\text{Number of compounding periods} \end{aligned}$$
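As a quick illustration (not part of the original article), the two formulas can be checked against the $100,000 versus $120,000 example mentioned earlier, using the article's 20% rate over a single period:

```python
def future_value(pv, rate, periods):
    return pv * (1 + rate) ** periods

def present_value(fv, rate, periods):
    return fv / (1 + rate) ** periods

# $100,000 today vs. $120,000 in one year at 20%: the two amounts are equivalent.
print(future_value(100_000, 0.20, 1))   # 120000.0
print(present_value(120_000, 0.20, 1))  # 100000.0
```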
Applying Net Present Value Calculations
Net present value calculations can also help you answer financial questions like determining the payment on a mortgage, or how much interest is being charged on that short-term holiday-expenses loan. By using a net present value calculation, you can find out how much you need to invest each month to achieve your goal. For example, in order to save $1 million for retirement in 20 years, assuming an annual return of 12.2%, you must save $984 per month.
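The $984-per-month figure can be reproduced by treating the savings plan as an ordinary annuity; the sketch below uses the 12.2% annual return and 20-year horizon quoted above, while the monthly-compounding convention is an assumption, since the article does not state one.

```python
def monthly_payment_for_goal(goal, annual_rate, years):
    """Deposit of an ordinary annuity that grows to `goal`, assuming monthly compounding."""
    r = annual_rate / 12          # periodic (monthly) rate
    n = years * 12                # number of deposits
    return goal * r / ((1 + r) ** n - 1)

print(round(monthly_payment_for_goal(1_000_000, 0.122, 20)))  # ~984 dollars per month
```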
Below is a list of the most common areas in which people use net present value calculations to help them make financial decisions.
Savings for college
Home, auto, or other major purchases
Financial planning (both business and personal)
The net present value calculation and its variations are quick and easy ways to measure the effects of time and interest on a given sum of money, whether it is received now or in the future. The calculation is well suited to short- and long-term planning, budgeting, or reference. When plotting out your financial future, keep these formulas in mind.
Is there an explicit relationship between the eigenvalues of a matrix and its derivative?
If we consider a matrix $A$ dependent on a variable $x$, the eigenvalues and eigenvectors satisfying the equation $$ A \vec{v}=\lambda \vec{v} $$
will also depend on $x$. If we consider the matrix $B$ such that $$B_{ij}=\frac{ \mathrm{d}}{ \mathrm{d} x} A_{ij}$$ Then, could we express the eigenvalues of $B$ in terms of the eigenvalues of $A$? I found the question very interesting and was not able to find a satisfying answer myself.
For example in the case for $2\times2$ matrices of the form $$ A=\left ( \begin{matrix} a(x) & b(x) \\ 0 & c(x) \end{matrix} \right ),\implies B=\left ( \begin{matrix} a'(x) & b'(x) \\ 0 & c'(x) \end{matrix} \right ) $$ I noticed that $\lambda_B(x)= \lambda_A'(x)$. But I cannot generalise it to general $2\times 2$ matrices. Not even thinking about $n\times n$ matrices...
Thank you for your help and any idea!
linear-algebra matrices derivatives eigenvalues-eigenvectors
Matt
One nice property of an upper (or lower) triangular matrix is that its eigenvalues are the same as its diagonal elements, which explains the $2\times 2$ examples that you discovered. But this is not a general property of all matrices. – greg Jan 27 at 20:51
It is not true in general that the eigenvalues of $B(x)$ are the derivatives of those of $A(x)$. And this even for some square matrices of dimension $2 \times 2$.
Consider the matrix
$$A(x) = \begin{pmatrix} 1& -x^2\\ -x &1 \end{pmatrix}$$ Its characteristic polynomial is $\chi_{A(x)}(t)=t^2-2t+1-x^3$, which has roots $1\pm x ^{3/2}$ for $x>0$. Those are the eigenvalues of $A(x)$.
The derivative of $A(x)$ is $$B(x) = \begin{pmatrix} 0& -2x\\ -1 &0 \end{pmatrix}$$ and its characteristic polynomial is $\chi_{B(x)}(t)=t^2-2x$, whose roots are $\pm \sqrt{2} x^{1/2}$ for $x>0$.
We get a counterexample as the derivative of $1+x^{3/2}$ is not $\sqrt{2}x^{1/2}$.
However in the special case of upper triangular matrices (that you consider in your original question) the eigenvalues of the matrix derivative are indeed the derivatives of the eigenvalues.
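For a concrete check at, say, $x = 4$ (any positive value works), the eigenvalues can be compared numerically:

```python
import numpy as np

x = 4.0
A = np.array([[1.0, -x**2],
              [-x,   1.0]])
B = np.array([[0.0, -2*x],       # elementwise derivative of A(x)
              [-1.0, 0.0]])

print(np.sort(np.linalg.eigvals(A)))             # 1 -/+ x^(3/2)         -> [-7.  9.]
print(np.sort(np.linalg.eigvals(B)))             # -/+ sqrt(2x)          -> [-2.83  2.83]
print(sorted([-1.5 * x**0.5, 1.5 * x**0.5]))     # d/dx of 1 -/+ x^(3/2) -> [-3.0, 3.0]
```

The eigenvalues of $B(4)$ are $\pm 2\sqrt{2}$, while the derivatives of the eigenvalues of $A(x)$ at $x=4$ are $\pm 3$, so the two sets do not coincide.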
mathcounterexamples.net
It can be shown that the eigenvalues of the derivative of the matrix cannot be derived from the eigenvalues of the original matrix. Example: $$ A_1 = \begin{pmatrix} -1 & 0 \\ 0 & 1 \end{pmatrix} \;\;\; , \;\;\; A_2 = \begin{pmatrix} 0 & e^x \\ e^{-x} & 0 \end{pmatrix} $$ Both of the matrices above have the eigenvalues $-1$ and $1.$ However, the derivative of the first matrix has the eigenvalue $0$ (with multiplicity 2), while the derivative of the second matrix has the eigenvalues $i$ and $-i.$
Just given the eigenvalues $-1$ and $1$, there is no way of telling which matrix they originate from, hence no way of getting the eigenvalues of the derivative.
Reinhard Meier
Let $\{\alpha_k,\beta_k\}$ be the eigenvalues of $(A,B)$ where $B(x) = A'(x).$
A class of matrices for which $\beta_k=\alpha_k'\,$ can be constructed as follows.
Choose an orthogonal matrix $Q$ and an upper triangular matrix $U(x).$ $$\eqalign{ A &= Q\,U\,Q^{-1} \cr A' &= Q\,U'Q^{-1} \cr }$$ Since $Q$ is orthogonal, the eigenvalues of $(U,U')$ equal the eigenvalues of $(A,A')$, respectively.
Since the eigenvalues of a triangular matrix are its diagonal elements,
the EVs of $U$ are $\{U_{kk}=\alpha_k\}\,$ and the EVs of $U'$ are $\{U'_{kk}=\alpha_k'=\beta_k\}.$
NB: $\,Q$ must be independent of $x$ for this construction to apply.
greg
If $A(x)v= \lambda(x)v$, with $v$ independent of $x$, then, differentiating both sides of the equation with respect to $x$, $A'(x)v= \lambda'(x)v$. That is, the eigenvalues of $A'$ are the derivatives of the eigenvalues of $A$, with the same associated eigenvectors.
The bias with what you say is that in general, the eigenvector depends on $x$. – mathcounterexamples.net Jan 25 at 20:49
What do we mean by spectrum?
Consider the context of image processing and computer vision, and, in particular, discrete Fourier transform.
For example, in the sentence
In the Discrete Time Fourier Transform the forward Fourier Transform correspond to a discrete function of a sequence $x[k]$, however the inverse transform still remains continuous:
$$x(t) = \frac{1}{2\pi} \int_{-\pi}^{\pi} \hat{x}(\omega) e^{j \omega t} d \omega, \; \forall t \in \mathbb{R}$$
the inverse Fourier Transform is typically limited to integration on $[−\pi, \pi]$ (with $T = 1)$ as frequencies outside of the interval just correspond to replicas of the original spectrum produced by the sampling procedure.
or in the sentence
To make the inverse transform treatable by modern digital computer, we need to discretize the spectrum of the signal as well
The word "spectrum" is used. I have seen it being used in several other places. However, I still do not get its meaning.
What is the spectrum? I think this is related to the concepts of time, frequency, spatial and spectral domains, but how exactly?
I have a little understanding of Fourier transform, but I haven't yet fully grasped the concept of discrete Fourier transform. Furthermore, my knowledge of signal processing, image processing and computer vision is very limited.
image-processing computer-vision terminology
nbro
Nah, it's simpler than that.
The spectrum is the result of the Fourier transform. It's also referred to as the frequency domain. The confusion arises from the common usage of the term (insert flavor here) Fourier Transform to refer to both the operation (the transform) and the results (spectrum).
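For instance (a minimal NumPy sketch with an assumed 64 Hz sampling rate, not tied to any particular signal from the question), sampling a 5 Hz sinusoid and taking its DFT gives a spectrum whose magnitude peaks at the ±5 Hz bins:

```python
import numpy as np

fs = 64                                    # assumed sampling rate, Hz
t = np.arange(fs) / fs                     # one second of samples
x = np.sin(2 * np.pi * 5 * t)              # 5 Hz sinusoid

spectrum = np.fft.fft(x)                   # the spectrum: the signal in the frequency domain
freqs = np.fft.fftfreq(len(x), d=1/fs)
print(freqs[np.argmax(np.abs(spectrum))])  # 5.0 -> the dominant frequency shows up as a peak
```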
As for the replication question, I will refer you to Significance of modular arithmetic in DFT?
Cedron Dawg
Constant current in a circuit?
A battery pumps electrons by creating an electric field and converting electric potential energy to kinetic. Near the positive terminal the electrons have more kinetic energy so shouldn't the current be higher?
An analogy might clarify my question: If you drop a ball from a building, the ball will speed up as it reaches the ground because more potential energy has been converted to kinetic energy. Similarly, shouldn't the electrons move faster as they approach the positive terminal, since they have more kinetic energy? And consequently shouldn't the current be higher?
electromagnetism physics
your analogy is completely off. Reality is more like Newton's cradle. In your analogy, there is only 1 molecule (a ball). In reality, as soon as you drop an electron, it will simply bump into another electron. – hassan789 Feb 9 '14 at 1:31
In electrical circuits, velocity is pretty much what sets the current (coulombs/sec). Kinetic energy is proportional to velocity squared (1/2·m·v²), which means that if you have a constant current, you have, on average, a constant kinetic energy.
Therefore, since the entire wire is filled with electrons (virtually gapless), all the electrons must have the same velocity (same current), and so the kinetic energy is equal everywhere.
Analogy, where water molecules = electrons. You can see that the molecules at the start of the pump don't have a larger velocity (current).
Another, weaker analogy is that of a train. Imagine the engine (battery) as the mechanism that applies the force (voltage/emf) to the rest of the carts (electrons). All the carts in the train will have the same velocity.
hassan789
Kinetic energy is not proportional to velocity. Momentum is proportional to velocity (assuming Newtonian - non-relativistic - mechanics are at play) – Spehro Pefhany Feb 9 '14 at 1:57
It's proportional to \$v^2\$, not \$v\$. In the example I cited, that makes oh, about 8 orders of magnitude difference, and it's important to understanding why the kinetic energy of the electrons has negligible effect. – Spehro Pefhany Feb 9 '14 at 2:09
sorry, you are correct, V^2. However, assuming V is constant, so is the KE. – hassan789 Feb 9 '14 at 2:12
There are some good, theoretically sound answers here. Let me try to explain from a different viewpoint:
I tend to not think of electrons flowing through wires, as this implies that their mass and momentum is what's causing the transfer of power. You often hear that you should imagine a tube of ping-pong balls. But this can be misleading, too! Instead, imagine an 8-ft diameter pipe packed with sand. You force some sand in one end, and some will come out the other end, but velocity, mass, and momentum don't play into it much.
The energy transfer happens because of a wavefront of excited electrons pushing (via electric fields) all the other electrons around them. Not because of the electron mass imparting Newtonian momentum. The actual electron drift in a 1-mm thick copper conductor is on the order of one millimeter per second!
In fact, that's one of the big places that the water analogy breaks down. There is no electrical momentum based on mass! (That's a strong statement, and not absolutely correct, but it will serve you well)
If you want to "add" momentum into your circuit, you'll use an inductor. This makes the water analogy useful again :)
There's an excellent example of this analog. Check out this Youtube of a Ram Pump: http://www.youtube.com/watch?v=qWqDurunnK8. It's a neat, old technology that many people have never seen. It turns out that it's exactly the same as a boost converter! If you haven't seen boost converters yet, you soon will. They're used all over the place in electrical circuits.
The Ram Pump works based on momentum. To make it work in electronics, you use an inductor to impart a momentum analog! It's awesome! Use a diode for the one-way valve, and a capacitor for the pressure chamber.
You're embarking on a fun adventure, this whole engineering/physics thing :)
bitsmack
Why is the current constant everywhere?
Well, it's not, really. Here's what's missing in your analogy: if the difference in gravitational potential from the top to the bottom of the building is analogous to the difference in electric potential (voltage) of the battery, and the ball represents an electric charge (say, an electron), what you are missing is all the other charge in the wire.
All conductors are full of movable electric charge, like a pipe full of water. If you put some charge in one end, you create a higher "pressure" at that end. Then a wave of force propagates through the fluid with the eventual outcome of equalizing the pressure everywhere. In water, these waves move at the speed of sound. In a wire, they move at the speed of light.1
Because these waves will eventually propagate throughout the entire circuit, if your battery voltage isn't changing, eventually it will reach equilibrium and the current will be the same everywhere. When the size of the circuit is small, light is so fast that it's a reasonable simplifying assumption that these waves propagate "instantly", and so the current is the same everywhere in the loop.
When this is not the case, and the time it takes the changes to propagate becomes significant, the circuit will likely be modeled with a transmission line and you are probably entering the discipline of RF engineering.
You should probably also not think about electrons moving from the negative terminal to the positive terminal. You will confuse yourself because everything will be backwards (because electrons are negative charge), and you will also be forgetting about roughly half the charge in the universe: protons, and other positive charge. Rarely is the motion of individual electrons relevant, and in many circuits (and certainly any circuit with a battery), electrons aren't the only charge carriers. Usually we care about the forces transmitted by charge carriers, not the charge carriers. See:
How is saltwater able to conduct electric charge between two wires?
Which everyday components involve flows of charge that are not electrons?
Current flow in batteries?
In your particular case, when the battery is first connected, electrons are attracted to the positive terminal, and repelled from the negative terminal. Current begins flowing at both terminals of the battery, and then the wave of force propagates through the wire until the current is flowing everywhere and the circuit reaches equilibrium.
You would also probably find this enlightening: How does the current know how much to flow, before having seen the resistor?
1: The speed of light in particular materials differs, just as the speed of sound does. See velocity factor and the very cool Cherenkov radiation, something like the light analog of a sonic boom.
Phil Frost
Kinetic energy from electron drift is minimal. We can see the effect of it in superconducting circuits, and at frequencies approaching daylight, where it appears as a kind of inductance, but it's not significant in ordinary circuits.
Electrons in a wire drift very slowly, meters per hour. That represents a substantial current because there are a lot of them.
Recall that current is charge flow (quantized as so much charge per electron) per unit time, nothing to do with kinetic energy, only how many electrons pass a given 'divider' per second.
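A rough back-of-the-envelope version of that estimate, assuming 1 A flowing through a 1 mm² copper wire and a textbook free-electron density for copper of about 8.5 × 10^28 per m³ (both values are assumptions, not taken from the answer):

```python
# Drift velocity v = I / (n * A * q)
I = 1.0          # current, amperes (assumed)
A = 1e-6         # cross-section, m^2 (1 mm^2, assumed)
n = 8.5e28       # free electrons per m^3 in copper (textbook value, assumed)
q = 1.602e-19    # elementary charge, coulombs

v = I / (n * A * q)
print(v, "m/s")          # ~7e-5 m/s
print(v * 3600, "m/h")   # a fraction of a meter per hour at this current density
```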
Spehro Pefhany
A good way to visualize electron drift is to completely pack tight a large diameter tube with ping pong balls from top to bottom.. pack it tight until the balls are flush to the edges. If you push one more ball in at one end, one ball comes out at the other end. That's electron drift ! The tube full of ping pong balls is like the wire full of electrons. Putting a ball in at one end causes one to come out the other end. Even though the ball only moved 40mm (the diameter of a ping pong ball) work was done at the other end of the tube (the wire) – Brian Onn Feb 9 '14 at 1:10
some (not all) of the reasoning is wrong. For example, current in fact is directly related to energy. – hassan789 Feb 9 '14 at 1:58
Electrons moving in a wire are not like balls being dropped.
When you drop a ball from a building, it has not much stopping it until it hits the ground. There is only air in the way, which represents a very small influence on the ball over the conditions one might imagine in this thought experiment.
Electrical circuits aren't like that. The mass of the electrons compared to the mass of everything else (protons, neutrons) in the wire is very tiny. But more significantly, the wire is full of electrons. You can't "drop" an electron: it will just hit other electrons. Don't think of a ball: think of a sea of balls. The individual balls aren't really so relevant: usually what we care about is how we can exploit this invisible "fluid" to do work.
The circuit you have drawn, by the way, can't exist. In a schematic, the lines represent ideal "wires" that are infinitely conductive, which means the voltage is the same everywhere in them. There are a lot of ways to explain this, but here's one: take Ohm's law:
$$ V = IR $$
Our "infinitely conductive" ideal wire means "zero resistance". So:
$$ V = I \cdot 0 \Omega $$
Can voltage (\$V\$) be anything but zero volts?
The battery meanwhile maintains ideally a constant 9V between its terminals. If we call the potential at the positive terminal \$V_+\$ and the potential at the negative terminal \$V_-\$, then the battery introduces the constraint:
$$ V_+ - V_- = 9\mathrm V $$
The schematic wire connecting the terminals of the battery also shares the same terminals of the battery, and as above, the voltage across this wire must be 0V, by definition. So we have this system of equations:
$$ \begin{cases} V_+ - V_- = 9\mathrm V \\ V_+ - V_- = I \cdot 0\Omega \end{cases} $$
Is there any solution to this system of equations? There is not. This circuit can't exist.
If you attempt to build this circuit with a real wire, that wire will have some small resistance. Let's say it's \$1\Omega\$. Most short wires will be less, but this will keep the math easy. Now the equations are:
$$ \begin{cases} V_+ - V_- = 9\mathrm V \\ V_+ - V_- = I \cdot 1\Omega \end{cases} $$

Now it's clear that the current will be 9A.
This should make your thought experiment more clear: in any real circuit, there must be some resistance1 between the battery terminals. If you want to make an analogy to more familiar physical phenomena, resistance is like a friction that acts on electric charge. This is where the energy from moving the charge from a high potential (positive terminal) to a lower potential (negative terminal) goes: it is converted to heat in the resistor.
1: superconductors have no resistance, but they do have inductance. Provided the battery can continue to supply energy, there is no limit to how high the current can become, but the current grows at a finite rate, so an infinite current would require an infinite energy source.
Thanks, but you didn't answer my question! Your answer made it clear that the circuit can't exist, and that the electrons are like fluid, but you didn't really address my question. Why doesn't the "fluid" speed up? – dfg Feb 9 '14 at 0:58
@dfg I'll try a different approach in another answer. Stand by... – Phil Frost Feb 9 '14 at 1:03
The Annals of Mathematical Statistics
Ann. Math. Statist.
Volume 27, Number 2 (1956), 513-520.
An Extension of the Kolmogorov Distribution
Jerome Blackman
Let $x_1, x_2, \cdots, x_n, x'_1, x'_2, \cdots, x'_{nk}$ be independent random variables with a common continuous distribution $F(x)$. Let $x_1, x_2, \cdots, x_n$ have the empiric distribution $F_n(x)$ and $x'_1, x'_2, \cdots, x'_{kn}$ have the empiric distribution $G_{nk}(x)$. The exact values of $P(-y < F_n(s) - G_{nk}(s) < x$ for all $s$) and $P(-y < F(s) - F_n(s) < x$ for all $s$) are obtained, as well as the first two terms of the asymptotic series for large $n$.
Ann. Math. Statist., Volume 27, Number 2 (1956), 513-520.
First available in Project Euclid: 28 April 2007
https://projecteuclid.org/euclid.aoms/1177728274
doi:10.1214/aoms/1177728274
MR82751
Blackman, Jerome. An Extension of the Kolmogorov Distribution. Ann. Math. Statist. 27 (1956), no. 2, 513--520. doi:10.1214/aoms/1177728274. https://projecteuclid.org/euclid.aoms/1177728274
See Correction: Jerome Blackman. Correction to "An Extension of the Kolmogorov Distribution". Ann. Math. Statist., Vol. 29, Iss. 1 (1958), 318--322.
Project Euclid: euclid.aoms/1177706737
Dyeing studies and fastness properties of a brown naphthoquinone colorant extracted from Juglans regia L. on natural protein fiber using different metal salt mordants
Mohd Nadeem Bukhari1,
Shahid-ul-Islam1,
Mohd Shabbir1,
Luqman Jameel Rather1,
Mohd Shahid1,
Urvashi Singh1,
Mohd Ali Khan2 &
Faqeer Mohammad1
In this study, wool fibers were dyed with a natural colorant extracted from walnut bark in the presence and absence of mordants. The effect of aluminum sulfate, ferrous sulfate, and stannous chloride mordants on the colorimetric and fastness properties of wool fibers was investigated. Juglone was identified as the main coloring component in the walnut bark extract by UV-visible and FTIR spectroscopic techniques. The results showed that pretreatment with metallic mordants substantially improved the colorimetric and fastness properties of wool fibers dyed with walnut bark extract. Ferrous sulfate and stannous chloride mordanted wool fibers showed better results than potassium aluminum sulfate mordanted and unmordanted wool fibers. This is ascribed to the strong chelating power of the ferrous sulfate and stannous chloride mordants.
Synthetic colorants, in view of their cheaper price, wide range of colors, and considerably improved fastness properties, are extensively used in textile industries for dyeing of different textile materials (El-Nagar et al. 2005; Islam and Mohammad 2014). However, recent research has shown that some of the azo- and benzidine-based synthetic dyes produce toxic, allergic, and carcinogenic secondary degradation byproducts (aromatic amines). In response, many European countries have imposed a ban on their use (Ali et al. 2013; Jothi 2008; Bechtold et al. 2003). Enhanced environmental awareness has motivated researchers to reintroduce natural colorants from natural sources like plants (stem, bark, leaves, roots, and flowers), animals, and minerals (Samanta and Agarwal 2009; Shahid et al. 2012; Shahid et al. 2013). In addition to their biodegradability and compatibility with the environment, natural colorants have recently been discovered to exhibit other functional properties, such as antimicrobial activity (Khan et al. 2012; Yusuf et al. 2015), insect repellent activity (Ali et al. 2013), fluorescence (Rather et al. 2015), UV protection (Grifoni et al. 2009; Sun and Tang 2011), and deodorizing (Lee et al. 2009). Therefore, natural colorants are among the potential candidates for developing green textile dyeing processes and serving as better alternatives or copartners to toxic synthetic colorants (Islam et al. 2014).
Despite their several advantages, natural colorants have some drawbacks, such as low dye exhaustion and poor fastness of dyed fabrics (Micheal et al. 2003). To overcome these problems, attempts have been made which have mainly focused on the use of metallic salts as mordants (Khan et al. 2006). Metal salt mordants form complexes with dye molecules on one side and with the functional groups of the textile substrate on the other side, resulting in improved fastness properties and exhaustion, as well as producing a wide range of shades with the same dye molecule (Shahid et al. 2013; Cristea and Vilarem 2006). Many efforts have been undertaken all over the world, and are currently underway, for the identification and isolation of natural dyes from different plant species for their use in coloration as well as in functional finishing of textiles (Hwang et al. 1998; Lee and Kim 2004).
Juglans regia L., commonly known as walnut, is one such dye-bearing plant commonly found in temperate regions. It is cultivated commercially in Asia, western South America, the United States, and Central and Southern Europe (Siva 2007). The parts of this tree, like leaves, husk, and shell, have been tested as potential dyeing materials for different textile substrates (Shaukat et al. 2009). Apart from textile dyeing, the parts of this tree are medicinally very useful, being used as depurative, anthelmintic, laxative, detergent, astringent, and diuretic, and they exhibit antimicrobial activity to a great extent due to their high phenolic contents (Vankar et al. 2007). The coloring power of Juglans regia L. is attributed to the presence of the naphthoquinone class of natural colorants (Tsamouris et al. 2002). Within the naphthoquinone class, juglone (CI 75500), chemically 5-hydroxy-1,4-naphthoquinone (C10H6O3), shown in Fig. 1, acts as a substantive dye and imparts a brown color to textile substrates (Chopra et al. 1996; Mirjalili et al. 2011; Mirjalili and Karimi 2013).
Chemical structure of juglone coloring compound
According to the literature, several studies have been reported on the dyeing properties of walnut on different textile fibers. Ali et al. (2016) used the bark of walnut to study the effect of potassium aluminum sulfate mordant on the dyeing properties of wool fibers. Hwang and Park (2013) focused on the dyeing properties of silk fibers using green walnut husk. Tutak and Benli (2011) used the husk, leaves, and shell of the walnut tree to study dyeing properties on different textile fibers. This study is an extension of the work reported by Ali et al. (2016), in which only aluminum potassium sulfate mordant was used to study its effect on the dyeing and fastness properties of wool fibers dyed with walnut bark. The present work focuses on the changes in colorimetric and fastness properties of wool fibers under the effect of aluminum potassium sulfate, ferrous sulfate, and stannous chloride mordants, with the aim of developing a wide range of beautiful shades on wool.
100% pure New Zealand semi-worsted woolen yarn (60 counts) was purchased from MAMB Woolens Ltd., Bhadohi, S R Nagar, Bhadohi (U.P.), India. Walnut bark powder was purchased from SAM Vegetable Colours Pvt. Ltd., India. The metallic mordants used were aluminum potassium sulfate (Al2K2(SO4)4·24H2O), ferrous sulfate (FeSO4·7H2O), and stannous chloride (SnCl2·2H2O). Hydrochloric acid (HCl), sodium hydroxide pellets, and anhydrous sodium carbonate were of laboratory grade.
Extraction of natural dye from walnut bark
The color component was extracted from the powdered walnut bark by aqueous extraction. Powdered walnut bark was taken in an aqueous solution at an M:L (material to liquor) ratio of 1:60 and kept for 12 h, then heated at 90 °C for 60 min with occasional stirring, cooled, and filtered. The remaining residue was heated two more times to get the maximum yield of colorant. The filtrate obtained was used for identification and dyeing of woolen yarns.
Spectral studies
The wavelength of maximum absorbance (λmax) of the dye extracted from walnut bark was evaluated in aqueous solution using a Perkin Elmer Lambda-40 double beam UV-visible spectrophotometer. The UV-visible spectrum was obtained in the 200–700 nm region. Fourier transform infrared (FTIR) spectra of samples were recorded on a Bruker Tensor 37 FT-IR spectrophotometer in the range 4000 to 500 cm−1. Discs were prepared by cutting samples of both pre- and post-mordanted dyed woolen yarn into fine pieces and grinding them with KBr, used as an internal standard.
Mild scouring of woolen yarn
Before the application of mordants, woolen yarn samples were soaked in non-ionic detergent solution (5 ml/L) as pre-treatment to enhance surface wettability (Sun and Tang 2011).
Mordanting process
Woolen yarn samples were mordanted by the pre-mordanting method using 10% aluminum potassium sulfate (o.w.f.), 5% ferrous sulfate (o.w.f.), and 1% stannous chloride (o.w.f.) (on the weight of fabric/yarn). The concentrations of mordants were fixed as per our previous results (Shabbir et al. 2016). Mordants were dissolved in water and the soaked woolen yarn samples were immersed in the mordant solutions. The pH of the mordant solution was kept neutral and mordanting was done for 60 min at an M:L of 1:40 and 90 °C. Mordanted woolen yarn samples were rinsed with running tap water to remove superfluous (unused) mordants.
The dye stock solution was prepared by dissolving 76.5 g of walnut bark powder in 3 litres of water. The extracted dye solution was divided into different concentrations ranging between 1 and 20% (o.w.f.). The dyeing experiments were performed at an M:L ratio of 1:40 in separate baths with manual agitation at pH 7, using 1, 5, 10, 15, and 20% (o.w.f.) dye concentrations. Woolen yarns were immersed in dyeing baths containing warm dye solution. The dye bath temperature was raised to the simmering point (91–93 °C) at a rate of 2 °C per min and maintained at that level for 60 min (Rather et al. 2016a, b). Finally, dyed samples were washed with 5 ml/L non-ionic detergent (Safewash, Wipro), rinsed with running tap water, and dried in shade.
Evaluation of color characteristics
The CIELab (L*, a*, b*, c*, h°) and color strength (K/S) values of dyed and mordanted-dyed samples were evaluated using a Gretag Macbeth Color-Eye 7000A spectrophotometer connected to a computer with MiniScan XE Plus software installed. The color strength (K/S) in the visible region of the spectrum (400–700 nm) was calculated based on the Kubelka–Munk equation (Eq. 1).
$$ \frac{K}{S}=\frac{{\left(1-R\right)}^2}{2R} $$
Where K is absorption coefficient, S is scattering coefficient, and R is reflectance of dyed samples.
Chroma (c*) and hue angles (h o) were calculated using following equations:
$$ \mathrm{Chroma}\ \left({c}^{*}\right)=\sqrt{a^2+{b}^2} $$
$$ \mathrm{Hue}\ \mathrm{angle}\ \left(h{}^{\circ}\right)={ \tan}^{-1}\left(\raisebox{1ex}{$b$}\!\left/ \!\raisebox{-1ex}{$a$}\right.\right) $$
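For illustration only (the numbers below are placeholders, not measurements from this study), the three formulas above translate directly into code:

```python
import math

def color_strength(R):
    """Kubelka-Munk K/S from reflectance R (0 < R <= 1)."""
    return (1 - R) ** 2 / (2 * R)

def chroma(a, b):
    return math.sqrt(a ** 2 + b ** 2)

def hue_angle(a, b):
    """Hue angle in degrees; atan2 keeps the angle in the correct quadrant."""
    return math.degrees(math.atan2(b, a)) % 360

# placeholder a*, b* and reflectance values
print(color_strength(0.25))   # 1.125
print(chroma(12.0, 20.0))     # ~23.3
print(hue_angle(12.0, 20.0))  # ~59 degrees
```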
Fastness testing
The light fastness of dyed woolen yarn samples were conducted on digi light NxTM having water cooled Mercury Blended Tungsten lamp as per test method AATCC 16E-1993 (2004) similar to ISO 105-B02:1994 (Amd.2:2000). The wash fastness of dyed woolen yarn samples was measured in Launder-o-meter as per the ISO 105-C06:1994 (2010) specifications. Dry and wet rub fastness of the dyed woolen yarn samples were tested using a Crock-meter as per Indian standard IS 766:1988 (Reaffirmed 2004) based on ISO 105-X12:2001 by mounting the fabric on panel and giving ten strokes for dry and wet rub fastness tests. The samples were assessed for staining on white adjacent fabrics (wool and cotton).
In the endeavor to explore novel adsorbent systems and to determine the extent of adsorption (efficiency of particular adsorbent), it is essential to establish the most appropriate adsorption equilibrium correlation, which is indispensable for reliable prediction of adsorption parameters (Rather et al. 2016; Srivastava et al. 2006). In the perspective of equilibrium relationships (adsorption isotherms), the interaction of adsorbents (dyes and mordants) with the adsorbent materials (wool fiber) is discussed and are critical for optimization of the adsorption mechanism pathways, expression of surface properties, efficiencies of adsorbents, and effective design of the adsorption systems (Rather et al. 2016; Gimbert et al. 2008). In the present study, the interaction between woolen yarn samples, mordants, and dye molecules was studied on the basis of enhanced color strength values (K/S) after mordanting and dyeing processes. Additionally, UV-Visible and FTIR spectral analysis were used to identify the chromophoric groups present in dye molecules which are supposed to be the main contributors of enhanced chemical interactions (Rather et al. 2016a, b).
UV absorption and FTIR spectral studies
The absorption spectra of natural dyes depend on the nature, number, and position of the chromophore and auxochrome groups, as well as the type and polarity of the solvent used for recording the spectra (Oakes & Dixon 2004). Figure 2 shows the UV spectrum of the dye extracted from J. regia L. The absorption spectrum of J. regia dye shows two major bands in the region of λmax 229 and 280 nm, which are ascribed to π-π* and n-π* transitions of the carbonyl group, respectively (Cotton and Wilkinson 1972).
UV visible spectra of Juglans regia L. bark extract
The FTIR spectrum of J. regia dye displays two intense bands in the region of 3220 and 1639 cm−1, corresponding to hydrogen-bonded –OH stretching and carbonyl (C=O) stretching frequencies (Fig. 3).
FTIR spectra of Juglans regia L. bark extract
Colorimetric properties
The color strength (K/S) values and color parameters such as L*, a*, and b* of unmordanted and mordanted woolen yarn dyed with walnut bark extract are shown in Table 1. It is evident from the results that the color strength (K/S) values increase with an increase in the concentration of walnut dye. In general, dyeing at low dye concentration resulted in high L* and low a* values, that is, brighter and less red shades. This is attributed to the concentration gradient of dye on the fiber via adsorption. An increase in dye bath concentration leads to more dye transfer to the fabric, and thus a higher apparent depth of color (Rather et al. 2016). According to the results expressed in Fig. 4 and Table 1, the unmordanted dyed woolen yarn showed lower dye uptake (lower K/S values) compared with the mordanted dyed woolen yarn (higher K/S values). Mordanting increases the interaction between the woolen yarn functional groups (amine functionality) and the dye functional groups (hydroxyl and carbonyl groups), resulting in increased dye exhaustion, which can be directly correlated with the increased color strength values of the dyed woolen yarn samples (Rather et al. 2016a, b). The proposed schematic representation of the increased interaction between dye and woolen yarn through the mordanting process (ferrous sulfate mordanting) is shown in Fig. 5.
Table 1 Colorimetric properties of unmordanted and mordanted dyed wool samples
Effect of mordants and dye concentrations on color strength (K/S) values dyed samples
Schematic representation of wool-mordant-dye interaction
Among the mordanted samples, ferrous sulfate treated samples showed higher color strength values than alum and stannous chloride mordanted samples, resulting in darker shades on the ferrous sulfate mordanted samples. The activity sequence in terms of increasing color strength values follows the order ferrous sulfate > stannous chloride > alum > unmordanted woolen yarn samples. This is ascribed to the strong coordinate complex formation tendency of ferrous sulfate within the fiber (Fig. 5) (Bhattacharya and Shah 2000).
From the experimental results of the a*-b* plot (Table 1, Fig. 6), it is clearly indicated that the color coordinates of all dyed samples (control and mordanted) lie in the red-yellow quadrant of the CIEL*a*b* color space diagram. The use of metal salts significantly altered the colorimetric data owing to the complexation and interaction they develop with the woolen yarn. Alum mordant shifts the color coordinates more toward the yellow region, whereas ferrous sulfate mordant shifts them more toward the red region of the color space diagram. However, the effect of stannous chloride mordant was found to be highly diversified with the change in dye concentration, although bright yellow shades were obtained. On the basis of the color strength (K/S) values and CIEL*a*b* parameters, it can be concluded that walnut dye, with or without mordants, can be successfully used as a natural colorant for developing a variety of shades of different hue and tone (Fig. 7).
a*-b* plot of unmordanted and mordanted dyed samples
Shade card of the dyed samples
The color fastness characteristics (light, wash, and rub) of all mordanted and unmordanted dyed woolen yarn samples are given in Table 2. It is evident from the color fastness tests that all samples show a very good light fastness rating of 5 on the blue scale. Mordanting was found to have no effect on the light fastness ratings of the dyed samples.
Table 2 Fastness properties of unmordanted and mordanted dyed wool samples
The wash fastness results of all samples were found to be in the range of fairly good to good, with ratings of 3–5. Mordanting with different metal salts significantly altered the wash fastness ratings of the dyed woolen yarn. Alum and ferrous sulfate positively affected the color change in the wash fastness results, but the tin mordant reduced the rating to 3. The fastness results also confirm the better complexation of the iron mordant with the wool fiber and dye molecules. The color change of all dyed wool samples was found to be at a fairly good to good level with ratings of 3–4, whereas the color staining on wool and cotton was found to be at a very good rating of 5. Color fastness to crocking was found to be within the range of 3–5, i.e., a fairly good to good level in all dyed yarn samples. Woolen yarn samples dyed at low concentration showed better wash and rub fastness properties. This is attributed to the fact that at higher dye concentration there is leaching of color from the dyed wool samples due to physical adsorption. The results of the present study are in good correlation with previously reported work by Ali et al. (2016).
Woolen yarn samples were dyed with a natural coloring agent extracted from walnut bark in order to develop natural shades in conjunction with small amounts of different metallic salt mordants. Novel and fashionable light and bright brown shades were observed in the alum mordanted dyed samples, reddish brown shades in the stannous chloride mordanted samples, and dark brown shades in the ferrous sulfate mordanted samples (Fig. 7). The maximum relative color strength followed the trend ferrous sulfate mordanted > stannous chloride > alum > unmordanted. All the dyed woolen yarn samples, irrespective of metal mordant, showed good to very good light fastness ratings. The wash fastness property was found to be at a fairly good to excellent level in most cases, whereas rub fastness was at a fairly good to good level in most cases. Based on the results of the colorimetric evaluation as well as the fastness properties, it can be concluded that the dye obtained from J. regia L. bark has a promising future in the textile dyeing industry.
Ali, M. A., Almahy, H. A., & Band, A. A. A. (2013). Extraction of carotenoids as natural dyes from the Daucus carota Linn (carrot) using ultrasound in Kingdom of Saudi Arabia. Research Journal of Chemical Sciences, 31, 63–66.
Ali, M. K., Islam, S., & Mohammad, F. (2016). Extraction of natural dye from walnut bark and its dyeing properties on wool yarn. Journal of Natural Fibers, 13, 458–469.
Bechtold, T., Turcanu, A., Ganglberger, E., & Geissler, S. (2003). Natural dyes in modern textile dye house–how to combine experiences of two centuries to meet demands of the future? Journal of Cleaner Production, 11, 499–509.
Bhattacharya, S. D., & Shah, A. K. (2000). Metal ion effect on dyeing of wool fabric with catechu. Coloration Technology, 116, 10–12.
Chopra, R. N., Nayar, S. L., & Chopra, R. C. (1996). Glossary of Indian medicinal plants (Including the Supplement) (p. 11). New Delhi: Council of Scientific and Industrial Research.
Cotton, F. A., & Wilkinson, G. (1972). Advanced inorganic chemistry: a comprehensive text (3rd ed.). New York: Wiley.
Cristea, D., & Vilarem, G. (2006). Improving light fastness of natural dyes on cotton yarn. Dyes and Pigments, 70, 238–245.
El-Nagar, K., Sanad, S. H., Mohamed, A. S., & Ramadan, A. (2005). Mechanical properties and stability to light exposure for dyed Egyptian cotton fabric with natural and synthetic dyes. Polymer-Plastics Technology and Engineering, 44, 1269–1279.
Gimbert, F., Morin-Crini, N., Renault, F., Badot, P. M., & Crini, G. (2008). Adsorption isotherm models for dye removal by cationized starch-based material in a single component system: Error analysis. Journal of Hazardous Materials, 157, 34–46.
Grifoni, D., Bacci, L., Zipoli, G., Carreras, G., Baronti, S., & Sabatini, F. (2009). Laboratory and outdoor assessment of UV protection offered by Flax and Hemp fabrics dyed with natural dyes. Photochemistry and Photobiology, 85, 313–320.
Hwang, J. S., & Park, S. Y. (2013). Application of green husk of Juglans regia Linn and effect of mordants for staining of silk. Journal of Convergence In Technology (JCIT), 12, 279–283.
Hwang, E. K., Kim, M. S., Lee, D. S., & Kim, K. B. (1998). Colour development of natural dyes with some mordants. Journal of Korean Fibre Society, 35, 490–497.
Islam, S., & Mohammad, F. (2014). Emerging green technologies and environment friendly products for sustainable textiles. In S. S. Muthu (Ed.), Roadmap to sustainable textiles and clothing (pp. 63–82). Singapore: Springer.
Islam, S., Rather, L. J., Shahid, M., Khan, M. A., & Mohammad, F. (2014). Study the effect of ammonia post-treatment on color characteristics of annatto-dyed textile substrate using reflectance spectrophotometery. Industrial Crops and Products, 59, 337–342.
Jothi, D. (2008). Extraction of natural dyes from African marigold flowers (Tagetes erecta) for textile coloration. Autex Research Journal, 8, 49–53.
Khan, M. A., Khan, M., Srivastava, P. K., & Mohammad, F. (2006). Extraction of natural dyes from cutch, ratanjot and madder, and their application on wool. Colourage, 53, 61–68.
Khan, S. A., Ahmad, A., Khan, M. I., Yusuf, M., Shahid, M., Manzoor, N., & Mohammad, F. (2012). Antimicrobial activity of wool yarn dyed with Rheum emodi L. (Indian Rhubarb). Dyes and Pigments, 95, 206–214.
Lee, Y. H., & Kim, H. D. (2004). Dyeing properties and colour fastness of cotton and silk fabrics dyed with Cassia tora L. extract. Fibers and Polymers, 5, 303–308.
Lee, Y., Hwang, E., & Kim, H. (2009). Colorimetric assay and antibacterial activity of cotton, silk and wool fabrics dyed with peony, pomegranate, clove, Coptis chinenis and gallnut extracts. Materials, 2, 10–21.
Micheal, M. N., Tera, F. M., & Aboelanwar, S. A. (2003). Colour measurements and colourant estimation of natural red dyes on natural fabrics using different mordants. Colourage, 1, 31–42.
Mirjalili, M., & Karimi, L. (2013). Extraction and characterization of natural dye from green walnut shells and its use in dyeing polyamide: focus on antibacterial properties. Journal of Chemistry, 0, 1–9.
Mirjalili, M., Nazarpoor, K., & Karimi, L. (2011). Extraction and identification of dye from walnut green husks for silk dyeing. Asian Journal of Chemistry, 23, 1055–1059.
Oakes, J., & Dixon, S. (2004). Physical interactions of dyes in solution – influence of dye structure on aggregation and binding to surfactants/polymers. Review of Progress in Coloration Technology, 34, 110.
Rather, L. J., Islam, S., & Mohammad, F. (2015). Study on the application of Acacia nilotica natural dye to wool using fluorescence and FT-IR spectroscopy. Fibers and Polymers, 16, 1497–1505.
Rather, L. J., Islam, S., Azam, M., Shabbir, M., Bukhari, M. N., Shahid, M., Khan, M. A., Haque, Q. M. R., & Mohammad, F. (2016). Antimicrobial and fluorescence finishing of woolen yarn with Terminalia arjuna natural dye as an ecofriendly substitute to synthetic Antibacterial agents. RSC Advances, 6, 39080–39094.
Rather, L. J., Islam, S., Khan, M. A., & Mohammad, F. (2016). Adsorption and kinetic studies of Adhatoda vasica natural dye onto woolen yarn with evaluations of colorimetric and fluorescence characteristics. Journal of Environmental Chemical Engineering, 4, 1780–1796.
Rather, L. J., Islam, S., Shabbir, M., Bukhari, M. N., Shahid, M., Khan, M. A., & Mohammad, F. (2016). Ecological dyeing of woolen yarn with Adhatoda vasica natural dye in the presence of biomordants as an alternative copartner to metal mordants. Journal of Environmental Chemical Engineering, 4, 3041–3049.
Samanta, A. K., & Agarwal, P. (2009). Application of natural dyes on textiles. Indian Journal of Fibre and Textile Research, 34, 384–399.
Shabbir, M., Islam, S., Bukhari, M. N., Rather, L. J., Khan, M. A., & Mohammad, F. (2016). Application of Terminalia chebula natural dye on wool fiber—evaluation of color and fastness properties. Textiles and Clothing Sustainability, 2, 1–9.
Shahid, M., Ahmad, A., Yusuf, M., Khan, M. I., Khan, S. A., Manzoor, N., & Mohammad, F. (2012). Dyeing, fastness and antimicrobial properties of woolen yarns dyed with gallnut (Quercus infectoria Oliv.) extract. Dyes and Pigments, 95, 53–61.
Shahid, M., Islam, S., & Mohammad, F. (2013). Recent advancements in natural dye applications: a review. Journal of Cleaner Production, 53, 310–331.
Shaukat, A., Tanveer, H., & Rakhshanda, N. (2009). Optimization of alkaline extraction of natural dye from Henna leaves and its dyeing on cotton by exhaust method. Journal of Cleaner Production, 17, 61–66.
Siva, R. (2007). Review article. Status of natural dyes and dye-yielding plants in India. Current Science, 92, 916–925.
Srivastava, V. V., Swamy, M. M., Mall, I. D., Prasad, B., & Mishra, I. M. (2006). Adsorptive removal of phenol by bagasse fly ash and activated carbon: equilibrium, kinetics and thermodynamics. Colloids Surface A, 272, 89–104.
Sun, S. S., & Tang, R. C. (2011). Adsorption and UV protection properties of the extract from honeysuckle onto wool. Industrial & Engineering Chemistry Research, 50, 4217–4224.
Tsamouris, G., Hatziantoniou, S., & Demetzos, C. (2002). Lipid analysis of Greek walnut oil (Juglans regia L.). Zeitschrift fur Naturforschung. C. Journal of Biosciences (Z Naturforsch C Biosci), 57, 51–60.
Tutak, M., & Benli, H. (2011). Colour and fastness of fabrics dyed with walnut (Juglans regia L.) Base natural dyes. Asian Journal of Chemistry, 23, 566–568.
Vankar, P. S., Rakhi, S., & Verma, A. (2007). Enzymatic natural dyeing of cotton and silk fabrics without metal mordants. Journal of Cleaner Production, 15, 1441–1450.
Yusuf, M., Shahid, M., Khan, M. I., Khan, S. A., Khan, M. A., & Mohammad, F. (2015). Dyeing studies with henna and madder: a research on effect of tin (II) chloride mordant. Journal of Saudi Chemical Society, 19, 64–72.
Financial support provided by the University Grants Commission, Govt. of India, through a Central University Fellowship for Mohd Nadeem Bukhari and BSR Fellowships for Meritorious Students for Mohd Shabbir and Luqman Jameel Rather, is gratefully acknowledged.
MNB carried out the dyeing studies. SI interpreted experimental data and drafted the manuscript. MS and LJR helped in carrying out experiments. MS and MAK helped in characterizations. US helped in designing of shade cards. FM designed the experimental protocol along with MNB. All authors read and approved the final manuscript.
Department of Chemistry, Jamia Millia Islamia, New Delhi, 110025, India
Mohd Nadeem Bukhari, Shahid-ul-Islam, Mohd Shabbir, Luqman Jameel Rather, Mohd Shahid, Urvashi Singh & Faqeer Mohammad
Department of Post Harvest Engineering and Technology, Faculty of Agricultural Sciences, A.M.U, Aligarh, 202002, U.P, India
Mohd Ali Khan
Mohd Nadeem Bukhari
Shahid-ul-Islam
Mohd Shabbir
Luqman Jameel Rather
Mohd Shahid
Urvashi Singh
Faqeer Mohammad
Correspondence to Faqeer Mohammad.
Bukhari, M.N., Shahid-ul-Islam, Shabbir, M. et al. Dyeing studies and fastness properties of brown naphthoquinone colorant extracted from Juglans regia L. on natural protein fiber using different metal salt mordants. Text Cloth Sustain 3, 3 (2017). https://doi.org/10.1186/s40689-016-0025-2
Juglans regia L.
Naphthoquinone
Fastness properties
|
CommonCrawl
|
Mittag-Leffler input stability of fractional differential equations and its applications
Ndolane Sene ,
Département de Mathématiques de la Décision, Université Cheikh Anta Diop de Dakar, Laboratoire Lmdan, BP 5683 Dakar Fann, Sénégal
* Corresponding author: Ndolane Sene
Received August 2018 Revised October 2018 Published March 2019
This paper addresses the Mittag-Leffler input stability of fractional differential equations with exogenous inputs. We continue the first note. We discuss three properties of the Mittag-Leffler input stability: converging-input converging-state, bounded-input bounded-state, and Mittag-Leffler stability of the unforced fractional differential equation. We present the Lyapunov characterization of the Mittag-Leffler input stability, and conclude by introducing the fractional input stability for delay fractional differential equations, for which we provide a Lyapunov-Krasovskii characterization. Several examples are treated to highlight the Mittag-Leffler input stability.
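As background for the stability notions discussed in the abstract, the sketch below evaluates the one-parameter Mittag-Leffler function E_alpha(z) = sum_k z^k / Gamma(alpha k + 1) by truncating its series. This is only an illustrative numerical aid (the truncation length, order alpha, and test values are arbitrary choices), not material from the paper itself.

```python
import math

def mittag_leffler(alpha, z, n_terms=120):
    """Truncated series for E_alpha(z); adequate only for moderate |z|."""
    return sum(z ** k / math.gamma(alpha * k + 1.0) for k in range(n_terms))

# Sanity check: E_1(z) reduces to exp(z).
assert abs(mittag_leffler(1.0, -2.0) - math.exp(-2.0)) < 1e-9

# E_alpha(-t^alpha) is the typical decay profile appearing in Mittag-Leffler
# stability estimates; alpha = 0.5 here is an arbitrary illustrative order.
for t in (0.0, 1.0, 5.0, 10.0):
    print(t, mittag_leffler(0.5, -t ** 0.5))
```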
Keywords: Fractional derivative, fractional differential equations with exogenous inputs, Mittag-Leffler input stable.
Mathematics Subject Classification: Primary: 26A33, 93D05; Secondary: 93D25.
Citation: Ndolane Sene. Mittag-Leffler input stability of fractional differential equations and its applications. Discrete & Continuous Dynamical Systems - S, doi: 10.3934/dcdss.2020050
|
CommonCrawl
|
Neither a vector, nor a scalar
While I was reading a book on mechanics, I noticed that, when introducing vector multiplication, the author stated that multiplying two vectors can produce a vector, a scalar, or some other quantity.
1.4 Multiplying Vectors
Multiplying one vector by another could produce a vector, a scalar, or some other quantity. The choice is up to us. It turns out that two types of vector multiplication are useful in physics.
An Introduction to Mechanics, Daniel Kleppner and Robert Kolenkow
The authors then examine the scalar or "dot product" and the vector or "cross product" (the latter not shown in the above link; can be seen on Amazon's preview) but seem to make no mention of any other method.
My concern is not about vector multiplication here, but about what that quantity can be which is neither a scalar nor a vector. The author has explicitly remarked that the quantity is neither a scalar nor a vector.
What I think is that, when we define vectors and scalars, we propose the definition in terms of direction: in one case direction is considered and in the other it is not. Then how can this definition leave space for any other quantity that is neither of the two?
I would be obliged if someone could explain to me whether the statement is correct and how it is so. It would also be great if you could substantiate your argument using examples.
vectors tensor-calculus
Abhinav DhawanAbhinav Dhawan
Can you please cite the book and page number so that it's easier for readers to think about it? – Avantgarde Aug 23 '17 at 0:51
I think the author may have had a tensor in mind. – ZeroTheHero Aug 23 '17 at 0:53
The author is likely talking about either a two-form or some other general tensor. You can have tensor and wedge products, or, in Clifford algebra / geometric algebra, a generalized geometric product – WetSavannaAnimal Aug 23 '17 at 0:54
I was reading the well known "Kleppner and Kolenkow's Introduction to Mechanics" – Abhinav Dhawan Aug 23 '17 at 0:55
This is a somewhat advanced concept if you are not familiar with linear algebra, but you can always start here: en.m.wikipedia.org/wiki/Tensor – ZeroTheHero Aug 23 '17 at 1:01
If you have two vectors $\mathbf{a}$ and $\mathbf{b}$, the inner product $\mathbf{a} \cdot \mathbf{b}$ is a scalar, the cross product $\mathbf{a} \times \mathbf{b}$ is a vector and the dyadic product $\mathbf{a} \otimes \mathbf{b}$ is a matrix. It is defined as
$$\mathbf{a}\otimes\mathbf{b} = \mathbf{a b}^\mathrm{T} = \begin{pmatrix} a_1 \\ a_2 \\ a_3 \end{pmatrix}\begin{pmatrix} b_1 & b_2 & b_3 \end{pmatrix} = \begin{pmatrix} a_1b_1 & a_1b_2 & a_1b_3 \\ a_2b_1 & a_2b_2 & a_2b_3 \\ a_3b_1 & a_3b_2 & a_3b_3 \end{pmatrix} $$
It occurs a lot in the formalism of quantum mechanics where it is written as $|a \rangle \langle b|$ (using the so-called bra-ket notation by Dirac).
With regard to direction: if you apply a matrix to a vector, the vector may get stretched / compressed along multiple axes. So in contrast to a vector, a matrix involves multiple directions.
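A quick numerical illustration of the three products mentioned above, using NumPy (the vectors are arbitrary examples):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.0])
b = np.array([0.0, 1.0, 3.0])

scalar = np.dot(a, b)     # inner product  -> a single number
vector = np.cross(a, b)   # cross product  -> another 3-vector (3D only)
matrix = np.outer(a, b)   # dyadic product -> a 3x3 matrix

# The dyadic |a><b| acts on a third vector c as a * (b . c):
c = np.array([2.0, -1.0, 1.0])
assert np.allclose(matrix @ c, a * np.dot(b, c))
```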
$\begingroup$ "Dyadic product" is also called an "outer product" or "tensor product". $\endgroup$ – DanielSank Aug 23 '17 at 2:06
$\begingroup$ It's also worth noting that, because of the universal property of the tensor product, any quantity that you could plausibly call a product (i.e. something linear in each factor) is a function of the tensor (dyadic) product. As examples, for the dot product it's the trace, and for the cross product it's the antisymmetric part (a.k.a. the Hodge dual). $\endgroup$ – Emilio Pisanty Aug 23 '17 at 4:37
$\begingroup$ Both Marc and Sal Elder's answers speak about using outer product in "Quantum Mechanics". While this remark is correct, I am afraid it gives the OP a wrong impression that he must work up to QM to meet these mathematical creatures. I would like to point out that outer products are used extensively in the more humble "continuum mechanics", of which solid mechanics and fluid mechanics are two branches. $\endgroup$ – Deep Aug 23 '17 at 5:08
$\begingroup$ In a mathematical sense, a matrix is part of a vector space so It is a vector. I get what you mean, but not the best example. $\endgroup$ – Lonidard Aug 23 '17 at 8:37
$\begingroup$ @Marc Thanks for your answer. I would love to read about tensors. Thanks again. $\endgroup$ – Abhinav Dhawan Aug 23 '17 at 9:45
For this answer, I think it's worth listing the common (i.e. an exhaustive list of what I can think of right now) vector products and some idea of their physical meaning. These are:
The inner, scalar product that yields scalars and, more generally, other scalar-valued bilinear products of vectors;
The tensor product that yields a tensor of rank 2
The Exterior, Grassmann, Alternating or wedge product that yields a 2-form for vector arguments (a special kind of rank 2 tensor)
The Lie bracket between two vectors of a Lie algebra, which yields another vector in the same Lie algebra;
The Clifford product or "geometric" product (Clifford's name for this), which generalizes and unifies (1) and (3).
The cross product is actually a disguised version of (3) or of (4) that only works in three dimensional, Euclidean space.
We begin with a vector, or linear, space $\mathbb{V}$ over a field of scalars $K$, which comes kitted with addition, making the vector space an Abelian group. For most useful-for-physics discussions we want any product to be bilinear, i.e. a binary operation that is linear in both its arguments or, alternatively, that is distributive on left and right and respects scalar multiplication:
$$(\alpha\,a+\beta\,b) \ast (\gamma\,c+\delta\,d) = \alpha\,\gamma\,a\ast c+ \beta\,\gamma\,b\ast c + \alpha\,\delta\,a\ast d + \beta\,\delta\,b\ast d;\quad\forall \alpha,\,\beta,\,\gamma,\,\delta\in K\;\forall a,\,b,\,c,\,d\in \mathbb{V}\tag{1}$$
and this will be a fundamental assumption in what follows. A more abstract consideration, possibly not for a first reading, is that when considering all the different products with property (1) one can reduce the object under consideration to a linear function of one variable from the tensor product space $\mathbb{V}\otimes \mathbb{V}$ to the range of the binary operation. See the "Universal Property" section of the Wikipedia tensor product page. In this sense, the tensor product 2. above is the most general product possible.
1. Bilinear Scalar Forms and the Dot Product
These are all entities that have the property (1) and map pairs of vectors to scalars. Their most general form is:
$$a\odot_M b = A^T\,M\,B\tag{2}$$
where $M$ is an $N\times N$ matrix where $N$ is the dimension of the vector space. The dot product is the special case where $M$ is symmetric and positive definite. But surely there are many more symmetric, positive definite matrices $M$ than simply the identity, which obviously gives the dot product from (2)? Actually, if $M$ is symmetric, (2) doesn't give us a great deal more generalness than the abstract inner product, but there are some interesting quirks. These quirks are most readily seen by applying Sylvester's Law of Inertia whereby there always exists a change of basis (invertible co-ordinate transformation) that reduces the matrix $M$ in (2) to the form
$$M=\mathrm{diag}(1,\,1\,\cdots,\,-1,\,-1,\,\cdots 0,\,0\,\cdots)\tag{3}$$
where the number of 1s, -1s and 0s is independent of the co-ordinate transformation. So we only need to look at (3) without loss of generalness. If there are all 1s, then we have an inner product, and we don't get anything essentially different to the dot product already discussed: a change of basis will force $M=\mathrm{id}$. If there are noughts present, then the form is degenerate, which means that there is at least one nonzero vector whose product with any other vector yields nought. If there are 1s and -1s present, then the product is not degenerate, but the notion of orthogonal complement breaks down: for any vector, there are always nonzero vectors whose product with the former is 0 and there are lightlike nonzero vectors whose self-product is nought. This nontrivially signatured case is what we come across in Minkowski spacetime and special and general relativity. If the matrix $M$ is skew-symmetric, then the product defined by (2) is called a symplectic form if $M$ is also nonsingular. The only nondegenerate symplectic products arise in even dimensional spaces. All (nondegenerate) symplectic forms can be rewritten, through co-ordinate transformations, as a form wherein $M$ takes its canonical value:
$$M = \left(\begin{array}{cc}0&-\mathrm{id}\\\mathrm{id}&0\end{array}\right)\tag{4}$$
where $\mathrm{id}$ is the $N\times N$ identity matrix, when the dimensionality of the vector space is $2\,N$.
Now let's specialize back to "the" inner product (they are all essentially the same, through the considerations above). The reason the dot product is useful is kind of subtle at the undergrad level when it is first introduced, because the concept is usually introduced in a way that makes its reason for being tautologous: the dot product gives us a simple algorithm to compute the unique decomposition of any vector into a superposition of basis vectors when the latter are orthonormal with respect to that product. This assertion actually isn't trivial or self evident: it rests on the Gram-Schmidt procedure. It follows from the axioms defining the vector space notion, in particular the axiom that a vector space is spanned by a basis and that the number of basis vectors must be independent of the choice. Moreover, if we introduce a scalar bilinear inner product, i.e. one that is strictly positive whenever the inner product is made between a nonzero vector and itself and zero when the zero vector is multiplied with itself, then one can always choose the basis to be orthonormal by using the GS procedure beginning with any basis.
This is all swept under the rug when we first learn about it because usually the basis is postulated as self evident and it is always orthonormal, as when we learn about $xyz$ co-ordinates in high school, so these details are hidden - nondistracting, perhaps, but also nondiscoverable. If you begin with the abstract definition, then it is easy to show that if we represent vectors $a\,b\,c,\cdots$ as column matrices $A,\,B,\,C,\,\cdots$ containing their components with respect to an orthonormal basis, then:
$$a\cdot b = A^T\,B = \left(\begin{array}{ccc}a_1&a_2&\cdots\end{array}\right)\left(\begin{array}{c}b_1\\b_2\\\vdots\end{array}\right)=\sum\limits_i\,a_i\,b_i\tag{5}$$
and, from elementary trigonometry, it is "intuitively obvious" on applying this concept that $a\cdot b = \sqrt{a\cdot a}\,\sqrt{b\cdot b}\,\cos\theta = |a|\,|b|\,\cos\theta$, although rigorously, we use (5) to define the angle between two vectors because the Cauchy-Schwarz inequality, which follows from the abstract properties of the inner product alone, shows that the range of $a\cdot b/(\sqrt{a\cdot a}\,\sqrt{b\cdot b})$ is precisely the interval $[-1,\,1]$, so there is no problem in defining this to be a cosine.
Experimentally it is found that the work done $F\cdot s$ by a force $F$ in displacing an object by $s$ equals the change in that object's total energy; this can be derived from Newton's laws, and thus the experimental observation indirectly further confirms the latter.
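As a concrete check of (5) and the angle formula (the vectors and the force/displacement values below are arbitrary illustrations, not taken from the answer):

```python
import numpy as np

a = np.array([3.0, 0.0, 4.0])
b = np.array([1.0, 2.0, 2.0])

dot = float(np.sum(a * b))                        # component form of (5)
cos_theta = dot / (np.linalg.norm(a) * np.linalg.norm(b))
theta = np.arccos(np.clip(cos_theta, -1.0, 1.0))  # Cauchy-Schwarz keeps |cos| <= 1

# Work done by a constant force F over a displacement s:
F = np.array([0.0, 0.0, -9.8])   # e.g. gravity on a 1 kg mass, in newtons
s = np.array([1.0, 0.0, -2.0])   # displacement in metres
work = np.dot(F, s)              # = 19.6 J
```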
2. Tensors and Tensor Product
To keep this already cluttered answer contained, I will not discuss dual vectors (covectors) in connexion with tensors.
The section above on bilinear forms is quite a deal to absorb, but tensors are only a small step beyond the above. A general tensor is simply a multilinear, scalar-valued function of vectors. Thus we can think of the dot product - the mapping $\cdot: \mathbb{V}\times\mathbb{V}\to K$ itself considered as a whole rather than the particular outcome of the dot product for two particular vectors - as a rank two tensor. That is, a bilinear, scalar-valued function of two vectors, given by (2) when $M=\mathrm{id}$. When thought of as a tensor in this way, the dot product function is better known as the Kronecker delta.
The tensors of rank $r$ (i.e. multilinear functions of $r$ vectors) for an $N$-dimensional vector space are themselves a vector space, this time of dimension $N^r$. A multilinear function of $r$ vectors is wholly specified, by definition of linearity, by its value at the $N^r$ possible combinations of the $N$ basis vectors input into the function's $r$ arguments. All other values follow simply by linear superposition. For a rank 2 tensor, we simply write these basis values into a matrix, and then (2) will work out the tensor's value. So we can identify rank 2 tensors with matrices. A rank 3 tensor, i.e. a trilinear function, would require a version of (2) with a three dimensional array, and so on.
Some tensors can be written as the products of vectors. The tensor product does this. The tensor product is also called the outer product when specialized to vector arguments. The tensor and outer products for vectors and rank 2 tensors are implemented by the Kronecker matrix product when we have component and matrix representations of the vectors / tensors in question. This is the product discussed and made explicit in Marc's answer and Sal Elder's answer. A general rank 2 tensor is a superposition of outer products of vectors.
3. Exterior, Wedge, Alternating or Grassmann Product
A particular superposition of outer products of pairs of vectors is one where the products of pairs themselves arise in skew-symmetric pairs. That is, it is a superposition of pairs of the form:
$$x \wedge y \stackrel{def}{=} x\otimes y -y\otimes x\tag{6}$$
and $x \wedge y$ is a bilinear function of vectors whose matrix (as in (2)) is skew-symmetric. General skew-symmetric rank 2 tensors are made up of superpositions of these wedge products of vectors. The functions they define are called symplectic forms when the form is nondegenerate, which can only happen in even dimensional spaces.
Geometrically, $x\wedge y$ represents a directed area. Its magnitude (the Euclidean norm of the $N\,(N-1)/2$ independent nonzero components) is the area enclosed by the parallelogram defined by the vectors $x$ and $y$. Threefold wedges represent directed volumes and multi wedges represent directed hypervolumes. In $N$ dimensions, there is only one independent nonzero component of the wedge of $N$ vectors, given by the determinant of the matrix with these vectors as its columns, which is the signed volume of the $N$-dimensional parallelepiped defined by these vectors.
Some of these properties should sound familiar as properties of the cross product. Indeed, in 3 dimensions, a directed plane can be defined by its unit normal vector, and contrariwise. This doesn't work in higher dimensions - a plane's orthogonal complement is also a plane. An operation called the Hodge dual generalizes this valid-only-in-3-dimensions isomorphism between directed planes and unit normals to other dimensions. In three dimensions, it picks off the nonzero components of the wedge of the three dimensional vectors $x$ and $y$ to define the cross product:
$$\begin{array}{lcl}x\wedge y &=& x\otimes y - y \otimes x=\left(\begin{array}{c}x_1\\x_2\\x_3\end{array}\right)\left(\begin{array}{ccc}y_1&y_2&y_3\end{array}\right)-\left(\begin{array}{c}y_1\\y_2\\y_3\end{array}\right)\left(\begin{array}{ccc}x_1&x_2&x_3\end{array}\right) \\\\ &=& \left(\begin{array}{ccc}0&x_1 y_2-y_1 x_2 & -(x_3 y_1 - y_3 x_1) \\-(x_1 y_2-y_1 x_2)&0&x_2 y_3 - y_2 x_3\\x_3 y_1 - y_3 x_1&-(x_2 y_3 - y_2 x_3)& 0\end{array}\right)\\\\&\stackrel{\text{ Hodge dual}}{\rightarrow} &\left(\begin{array}{c}x_2 y_3 - y_2 x_3\\x_3 y_1 - y_3 x_1\\x_1 y_2-y_1 x_2\end{array}\right) \stackrel{def}{=} x\times y\end{array}\tag{7}$$
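Equation (7) can be verified numerically; the small sketch below builds the wedge product as an antisymmetrized outer product and extracts its three independent components (the Hodge dual in 3D), recovering NumPy's cross product. The test vectors are arbitrary.

```python
import numpy as np

def wedge(x, y):
    """x ^ y as the skew-symmetric matrix x (x) y - y (x) x."""
    return np.outer(x, y) - np.outer(y, x)

def hodge_dual_3d(B):
    """Pick off the three independent components of a 3x3 skew-symmetric matrix,
    in the order used in equation (7)."""
    return np.array([B[1, 2], B[2, 0], B[0, 1]])

x = np.array([1.0, -2.0, 0.5])
y = np.array([0.3, 4.0, -1.0])
assert np.allclose(hodge_dual_3d(wedge(x, y)), np.cross(x, y))
```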
4. Lie Brackets on Lie Algebras
A Lie bracket $[]:\mathbb{V}\times \mathbb{V} \to \mathbb{V}$ on a vector space $\mathbb{V}$ is, by definition, any skew-symmetric, bilinear, binary, vector-valued operation that fulfills the Jacobi identity:
$$[a,\,[b,\,c]] + [c,\,[a,\,b]]+[b,\,[c,\,a]]=0;\quad \forall a,\,b,\,c\in\mathbb{V}\tag{8}$$
A vector space with such a product is a Lie algebra. (8) is easily remembered by cycling the arguments of the double Lie brackets to get from one term to the next. Or, if you're moved by risqué ditties, von Neumann's mnemonic that "a spanks b whilst c watches, c spanks a whilst b watches, and then b spanks c whilst a watches".
For any finite dimensional Lie algebra with a vector space whose scalars are from a field of characteristic nought, one can find a faithful representation of the algebraic structure as vector space of square matrices of the same dimension with the commutator product $[a,\,b]=a\,b - b\,a$. This assertion is Ado's Theorem; not every vector space of matrices forms a Lie algebra in this way, because the space has to be closed under the commutator bracket. It is this last observation that makes Ado's theorem a difficult thing to prove. (I believe there's a generalization that removes the zero characteristic restriction, but I've never looked it up).
The most important fact in physics and mechanics is that Lie algebras represent the differential action of a Lie group, of which the group of rotations is an example. The Lie algebra of a Lie group is the group's tangent space at the identity. A Lie group acts on its own Lie algebra through the adjoint representation; for a matrix Lie group, the action of group member $\gamma$ on the Lie algebra member $Y$ can be written $Y\mapsto \gamma\,Y \,\gamma^{-1}$ (where juxtaposition in the expression on the right is simply the matrix product). If the Lie group member is of the form $\exp(s\,X)$, where $X$ is also a member of the group's Lie algebra (and the identity connected component of a Lie group can always be written as a finite product of such entities), then the action is of the form $Y\mapsto \exp(s\,X)\,Y \,\exp(-s\,X)$. The derivative of this expression, i.e. the "infinitesimal" action of the Lie group on its own algebra, defines the Lie bracket, i.e.:
$$[X,\,Y]\stackrel{def}{=} \left.\frac{\mathrm{d}}{\mathrm{d} s} \exp(s\,X)\,Y \,\exp(-s\,X)\right|_{s=0}\tag{9}$$
In a matrix Lie group, it's a simple matter to check that this definition defines the commutator bracket:
$$[X,\,Y] = X\,Y - Y\,X\tag{10}$$
There is a simple way to give meaning to the mapping $Y\mapsto \gamma\,Y \,\gamma^{-1}$ in a general Lie group where there is no matrix product defined, so the above ideas are general.
Now, for example, if we are talking about the three dimensional rotation group, which is $SO(3)$, it is easy to show that the Lie algebra is the algebra of $3\times 3$ skew-symmetric matrices. To do this, write a group member near the identity as $\mathrm{id} + \delta\, X + O(\delta^2)$, where $X$ is any Lie algebra member. The rotations are the proper isometries with respect to the dot product norm in Euclidean space, i.e. they conserve the inner product. This conservation under the action of the rotation matrix $\mathrm{id} + \delta\, X + O(\delta^2)$ on column vectors is written as:
$$((\mathrm{id}+ \delta\, X + O(\delta^2))\,x)^T \,(\mathrm{id}+ \delta\, X + O(\delta^2)) y = x^T\,y;\; \forall x,\,y\in\mathbb{R}^N \Leftrightarrow X + X^T = 0\tag{11}$$
So any small angle rotation of a vector is well approximated by the action of a skew-symmetric matrix on that vector. One property that the rotation group enjoys (as do all Lie groups without continuous centers) is that the adjoint action of the group on its own Lie algebra defines exactly the same group. Furthermore, in three dimensions, the rotation group has three dimensions, exactly like the Euclidean space acted on by the matrix rotation group $SO(3)$. So in three dimensions only, the rotation group's adjoint action on its Lie algebra is exactly the same as its action as rotation matrices on column vectors. So consider the three dimensional Hodge dual operation at the very right of equation (7). By the equality of the $SO(3)$ group and its adjoint action image on the one hand, and of the adjoint action and the matrix-on-column-vector action on the other, we have the following procedure for approximating a rotation of the column vector $Y$ by a small angle $\theta$ about the axis defined by the unit vector $\hat{X}$. Alternatively, let's absorb $\theta$ into $\hat{X}$ to get a nonunit magnitude vector $X$, whose magnitude is $\theta \approx \sin\theta$ where $\theta$ is small:
Convert column vectors $X$ and $Y$ to skew-symmetric matrices $X^s,\, Y^s$ using the inverse Hodge dual operation;
Compute the Lie bracket $[X^s,\,Y^s] = X^s Y^s - Y^s X^s$
Convert the $Z^s=[X^s,\,Y^s]$ back to a column vector by extracting its nonzero components and arranging them into a column vector $Z$ through the Hodge dual
The rotated image is $Y+Z$
You can check that steps 1, 2 and 3 define the cross product $Z=X\times Y$. You can also check that the cross product fulfills the Jacobi identity.
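Here is a small numerical check of that claim, using the standard so(3) "hat" map as the inverse Hodge dual (the test vectors are arbitrary):

```python
import numpy as np

def hat(v):
    """Inverse Hodge dual: 3-vector -> 3x3 skew-symmetric matrix, so hat(v) @ w = v x w."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

def vee(M):
    """Hodge dual: skew-symmetric matrix -> 3-vector (inverse of hat)."""
    return np.array([M[2, 1], M[0, 2], M[1, 0]])

X = np.array([0.1, -0.3, 0.2])
Y = np.array([0.5, 0.0, -0.4])

# Steps 1-3: hat, commutator, dual back -- this reproduces the cross product.
Z = vee(hat(X) @ hat(Y) - hat(Y) @ hat(X))
assert np.allclose(Z, np.cross(X, Y))

# Jacobi identity for the cross product:
a, b, c = np.random.rand(3), np.random.rand(3), np.random.rand(3)
jacobi = np.cross(a, np.cross(b, c)) + np.cross(c, np.cross(a, b)) + np.cross(b, np.cross(c, a))
assert np.allclose(jacobi, 0.0)
```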
Alternatively, if an object is spinning, the above procedure shows that the change in $Y$ over time $\delta t$ is, to first order, $\delta t\, \omega\times Y$, where $\omega$ is the vector along the axis of rotation and whose magnitude is the angular speed. Whence the formula $V = \omega\times Y$ for the velocity of the head of vector $Y$.
Note that the special conditions on dimension discussed above are equivalent to the assertion that the rotation axis notion only works in dimension 3 and lower. In higher dimensions, rotations must be defined by their planes.
Higher dimensional rotation groups also define Lie bracket products between the planes (skew-symmetric two forms) corresponding to their Lie algebra members, and this product works perfectly well and logically. But since the dimension $N\,(N-1)/2$ of the $N$ dimensional rotation group's Lie algebra is much greater than the dimension $N$ of the rotated column vectors, the Lie algebra members do not correspond to vectors in the underlying space anymore and there is no simple relationship between the group actions.
5. The Clifford Product
This product has already been discussed in Timo's Answer.
The wedge product of section 3 defines an exterior algebra, which can be thought of as a "degenerate" Clifford algebra where the product of something with itself vanishes (as is the case for all alternating forms).
A Clifford algebra is an algebra generated by an abstract bilinear product with a vector space as a subspace of generators. Aside from bilinearity, the algebra is free apart from the condition that a generator vector multiplied by itself is a quadratic form, i.e. an entity defined by (2) where we can, without loss of generalness, take $M$ to be symmetric (since we only ever invoke (2) for this lone, unfree condition, i.e. we only invoke (2) when $a$ and $b$ are equal). This concept generalizes the exterior algebra of section 3, which can be thought of as a Clifford algebra where $M=0$.
Geometric algebras are Clifford algebras where the quadratic form is nondegenerate, i.e. $M$ is nonsingular.
WetSavannaAnimal
Wow ! That's the most extended answer I've seen for a question on stack exchange. +1 just for that ! – Cham Aug 29 '17 at 0:46
@Someone, well, I hope you find it useful and +1 is not simply "just for that". We get similar questions on Phys SE from time to time, so I thought I'd write a reasonably full summary that could be referenced in a comment when the question inevitably comes up again. Maybe you'll find it useful in another branch of the Multiverse! – WetSavannaAnimal Aug 29 '17 at 1:09
For the record, I +1 for finding it genuinely helpful. Thanks for your effort! – Marc Sep 4 '17 at 16:40
Say I write a column vector followed by a row vector. For example, $$\begin{bmatrix}1\\ 0\\ 0\end{bmatrix}\begin{bmatrix}1 & 0 & 0\end{bmatrix}.$$ That's a matrix! Specifically, it would be $$\begin{bmatrix}1&0&0\\ 0&0&0 \\ 0&0&0\end{bmatrix}.$$ In an intro physics class, you probably don't have to worry about this sort of thing. You'll see it again in quantum mechanics.
Sal Elder
@Marc and Sal Elder, a thing I want to ask is whether tensors can also represent physical quantities, which according to me is not possible, as you said that a matrix has multiple directions. What are your thoughts on this? – Abhinav Dhawan Aug 23 '17 at 9:52
@AbhinavDhawan: Why couldn't a physical quantity have multiple dimensions? A stress tensor is a well-known example. Take a metal rod (long cylinder) and twist it. I.e. rotate one end while holding the other end fixed. This will cause an elastic deformation of the rod, with mechanical stresses in the material. Independently, you can pull on the end of the rod. This also causes an elastic deformation in the length direction. In other words, the total deformation (and resulting elastic stress) are multi-dimensional, where the values along the various dimensions can take independent values. – MSalters Aug 23 '17 at 12:42
@AbhinavDhawan Tensors do represent a lot of very important physical quantities. Advanced physics would be impossible without tensors. For example: stress tensor, electromagnetic field tensor, Riemann tensor, tensor of deformation, electric permittivity tensor (for anisotropic media), and many more. See here: en.wikipedia.org/wiki/Tensor#Applications – mpv Aug 23 '17 at 12:44
@Abhinav Dhawan, it may be interesting to compare electromagnetism with gravity (as described by general relativity) [caveat: my understanding of general relativity is only superficial]. The electric field is a vector field, which means that it assigns a vector $\mathbf{E}(\mathbf{x})$ to every point in space. If we have an electromagnetic wave, the length of our electric field vectors oscillates with time. If we take a ring of electric charges and put it in the region where our electromagnetic wave acts, the whole ring oscillates up and down. – Marc Aug 23 '17 at 20:59
The gravitational field on the other hand is a tensor field and assigns a two-dimensional tensor to every point in space. If we have a gravitational wave and put a ring of masses in a region where it acts, the ring gets compressed and stretched in different directions. See en.wikipedia.org/wiki/Gravitational_wave#Effects_of_passing. – Marc Aug 23 '17 at 21:02
While this is not discussed in the book, one possible, and often very useful, way to multiply vectors to get "some other quantity" is called the geometric product, or a Clifford product in maths terminology.
Specifically, given two vectors $u, v$ which are orthogonal, the product $uv$ is a bivector, which has an interpretation as the oriented plane spanned by $u$ and $v$, with magnitude equal to $|u||v|$. By oriented we mean that $uv = -vu$, so the product of orthogonal vectors anticommutes.
We further set $u^2 = u u = |u|^2$, where the norm is the usual one induced by the inner product on $\mathbb{R}^n$. Then saying that this product is associative and distributive (i.e. you can work with it in the same way as scalar multiplication, except that it doesn't commute), fixes the smallest algebra that has this kind of a multiplication. You can easily see that having fixed how a vector squares and what is the product of anticommuting vectors fixes the product of two vectors in any orientation.
Once we get this far, it's apparent that there has to be more in the resulting algebra than just scalars, vectors and bivectors. Indeed, given any distinct mutually orthogonal vectors $a_1, a_2, \ldots, a_k$, the product $a_1 a_2 \cdots a_k$ is an object called a $k$-vector or a $k$-blade. In $n$ dimensions, the biggest possible blade is an $n$-blade, since there are no more orthogonal distinct vectors. Intuitively, a $k$-blade is a $k$-dimensional oriented volume with a magnitude.
Now let $u$ and $v$ be arbitrary, not necessarily parallel or orthogonal, and denote by $u\wedge v$ the operation of constructing the bivector corresponding to the plane where $u$ and $v$ lie, with magnitude $|u||v|\sin(\theta)$ where $\theta$ is the angle between $u$ and $v$. Then it is simple to check, by decomposing one of the vectors into a parallel and an orthogonal part with respect to the other, that $uv = u\cdot v + u\wedge v$, so the product of two vectors is in general a sum of a scalar and a bivector.
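A quick numerical sanity check of that decomposition in 3D, where the magnitude of $u\wedge v$ equals $|u\times v|$ (arbitrary test vectors; this only checks the magnitudes, not the full multivector algebra):

```python
import numpy as np

u = np.array([1.0, 2.0, 0.0])
v = np.array([0.5, -1.0, 2.0])

dot = np.dot(u, v)                          # grade-0 (scalar) part of uv
wedge_mag = np.linalg.norm(np.cross(u, v))  # magnitude of the grade-2 (bivector) part

# (u.v)^2 + |u ^ v|^2 = |u|^2 |v|^2, i.e. cos^2 + sin^2 = 1 (Lagrange's identity)
assert np.isclose(dot**2 + wedge_mag**2, np.dot(u, u) * np.dot(v, v))
```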
By multiplying and adding more vectors, we can form objects consisting of sums of scalars, vectors, and $k$-vectors up to $k = n$. Such an object is called a multivector.
The resulting system is called a geometric algebra, and it has many uses in physics, even though many special cases, such as the Dirac algebra in Minkowski space, are called by different names. For more information, you could do worse than check out the Wikipedia entry, or the cheekily titled document Imaginary numbers are not real.
Timo
Thanks for your answer. Though I'm not in a position to understand all this now, it still gives a lot of exposure to new and strange ideas. – Abhinav Dhawan Aug 23 '17 at 16:05
|
CommonCrawl
|
Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation
Xianling Dong1,
Shiqi Xu1,
Yanli Liu1,
Aihui Wang2,
M. Iqbal Saripan3,
Li Li1,
Xiaolei Zhang ORCID: orcid.org/0000-0002-0896-94011 &
Lijun Lu4
Convolutional neural networks (CNNs) have been extensively applied to two-dimensional (2D) medical image segmentation, yielding excellent performance. However, their application to three-dimensional (3D) nodule segmentation remains a challenge.
In this study, we propose a multi-view secondary input residual (MV-SIR) convolutional neural network model for 3D lung nodule segmentation using the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI) dataset of chest computed tomography (CT) images. Lung nodule cubes are prepared from the sample CT images. Then, from the axial, coronal, and sagittal perspectives, multi-view patches are generated with randomly selected voxels in the lung nodule cubes as centers. Our model consists of six submodels, which enable it to learn features of the 3D lung nodule from slices in the three views; each submodel extracts voxel heterogeneity and shape heterogeneity features. We convert the segmentation of 3D lung nodules into voxel classification by inputting the multi-view patches into the model and determining whether the voxel points belong to the nodule. The structure of each secondary input residual submodel comprises a residual block followed by a secondary input module. We integrate the six submodels to classify whether voxel points belong to nodules, and then reconstruct the segmentation image.
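To make the idea of one such submodel concrete, here is a rough PyTorch sketch of a 2D patch classifier built as a residual block followed by re-injection of the raw patch as a secondary input. The layer widths, patch size, and the way the secondary input is fused are our own assumptions for illustration and are not taken from the paper.

```python
import torch
import torch.nn as nn

class SIRSubmodel(nn.Module):
    """Illustrative secondary-input residual submodel for one view and one image type."""
    def __init__(self):
        super().__init__()
        self.conv_in = nn.Conv2d(1, 16, kernel_size=3, padding=1)
        self.res_block = nn.Sequential(
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.BatchNorm2d(16),
        )
        # Secondary input: the raw patch is concatenated back in after the residual block.
        self.fuse = nn.Conv2d(16 + 1, 16, kernel_size=1)
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 2))

    def forward(self, patch):                   # patch: (N, 1, H, W)
        x = torch.relu(self.conv_in(patch))
        x = torch.relu(x + self.res_block(x))   # residual connection
        x = torch.relu(self.fuse(torch.cat([x, patch], dim=1)))  # secondary input
        return self.head(x)                     # nodule / non-nodule logits for the centre voxel

logits = SIRSubmodel()(torch.randn(4, 1, 32, 32))  # four example patches
```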
The results of tests conducted using our model and comparison with other existing CNN models indicate that the MV-SIR model achieves excellent results in the 3D segmentation of pulmonary nodules, with a Dice coefficient of 0.926 and an average surface distance of 0.072.
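For reference, the Dice coefficient reported above is the standard overlap measure between the predicted and ground-truth masks; a minimal implementation (not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice = 2 * |A intersect B| / (|A| + |B|) for two binary 3D masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    overlap = np.logical_and(pred, truth).sum()
    return 2.0 * overlap / (pred.sum() + truth.sum())
```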
Our MV-SIR model can accurately perform 3D segmentation of lung nodules, with the same segmentation accuracy as the U-net model.
The American Cancer Society estimated that, in 2018, lung cancer remained the leading cancer type among 1.73 million new cancer patients, and hundreds of thousands of patients die of lung cancer every year [1]. CT is the most commonly used modality in the management of lung nodules, and automatic 3D segmentation of nodules on CT will help in their detection and follow-up. Accurate computer-assisted segmentation and localization of 3D lung nodules can aid the discovery and treatment of lung nodules and is a prerequisite for liver and tumor resection [2]. Recent research has shown that convolutional neural networks (CNNs) can automatically learn the characteristics of medical images, and thus can be applied in segmenting medical images with high accuracy [3,4,5]. Manual marking of each patient's lesion location by physicians and radiologists is generally accepted as the gold standard for medical image segmentation. However, because the number of 3D image slices generally reaches several hundred, the calibration process is time consuming, and experts face an immense workload owing to the shortage of experienced physicians and radiologists. Moreover, with the continuous development of medical technology, people are more and more concerned about their health, resulting in a significant increase in the number of CT scans every year. The burdens of doctors and radiologists are getting heavier, and patients have to wait longer for results, which is not conducive to the healthy development of medical and health services. The development of computer-aided intelligent segmentation and classification of 3D medical images improves the processing speed of medical images, enhances the accuracy of diagnosis by doctors, and reduces the burden on physicians and radiologists [6]. Combining deep learning with 3D segmentation of medical images enables more accurate 3D segmentation of lung nodules, which helps doctors find and follow up lung nodules. CNNs have made great progress in 2D segmentation of medical images, but their application in 3D segmentation is still a challenging task. The reasons for this difficulty are as follows. First, the learning process of CNNs requires a large amount of 3D medical image data and their ground truths to produce good prediction results; however, there is still a lack of such large amounts of data [7, 8]. Second, the class balance between negative and positive samples in a 3D dataset is a challenge. In general, there are far more negative non-nodular samples than positive nodular ones. For example, in lung CT images, some lung nodules are only 3–5 mm in diameter, with extremely low volume [9]. Therefore, if a deep-learning CNN is provided with sufficient training data and a better class balance, the loss function of the CNN can be easily minimized and a good model can be effectively trained [10]. Third, 3D CNNs consume a considerable amount of computing resources, such as graphics cards and memory, during training. In the model prediction process, the trained network has high hardware requirements, which restricts the promotion of its application. Therefore, the algorithm needs to be optimized to render it simple and dexterous such that it is more conducive to 3D medical image segmentation tasks [11].
In this study, we propose a multi-view secondary input residual (MV-SIR) model for 3D segmentation of pulmonary nodules in chest CT images. We extract lung nodules into voxel cubes, adding 10 pixels in each of the six directions of the nodule to include some surrounding non-nodule tissue. After that, we extract a certain number of voxel points in the lung nodule part and an equal number of voxel points in the expanded part to balance the positive and negative samples. In the lung nodule cube, scale patches in the axial, coronal, and sagittal views are extracted centered on the randomly selected voxel points. Randomly selecting a subset of voxel points in the lung nodule cube efficiently captures most of the image features of the nodule while avoiding voxel points that are too close together, which would yield overly similar patches and redundant data. For each view, voxel heterogeneity (VH) and shape heterogeneity (SH) features are extracted. The density and shape of tumor tissue differ markedly from those of normal tissue, and there is a strong correspondence between the judgment of nodules and their heterogeneity. In CT images, VH reflects gray-scale heterogeneity and tissue density information, and SH reflects tissue shape information. We then construct an SIR submodel for feature learning for the two patches of each view; thus, six submodels are constructed. Finally, we integrate the six SIR submodels into the MV-SIR model and learn whether the patches extracted at each point in the cube belong to the pulmonary nodule. Overall, the proposed MV-SIR model has the following contributions:
To the best of our knowledge, this is the first time that a combination of a secondary input and residual blocks has been added to a CNN model for the segmentation of 3D pulmonary nodules in CT images. This combination can serve as a reference for the application of CNN models in medical image classification and segmentation tasks.
Using multi-view (axial, coronal, and sagittal) and multi-image (VH and SH) features as input to the MV-SIR model, full feature extraction can be performed on 3D CT medical images, which improves the accuracy of 3D lung nodule segmentation.
Integration of the six SIR submodels from the three views into one model improves the performance of the model. The model thus constructed predicts faster and consumes less computing power than 3D segmentation models based on 3D convolutional kernels.
In recent years, an increasing number of studies have developed deep learning CNN tools for medical image segmentation and classification [12, 13]. In 2D CNN models, a 3D medical image is sliced into 2D images for feature learning, and 3D medical image segmentation is then performed on the basis of the predictions of the 2D CNN model [14,15,16]. Wang et al. captured detailed texture and nodule shape information using a scale patch strategy as the input to the MV-CNN and obtained segmentation results with an average surface distance (ASD) of 0.24 [17]. Xie et al. decomposed 3D nodules into nine fixed views to learn the characteristics of 3D pulmonary nodules, and the segmentation result of the model had an accuracy of 91.60% [18]. Another method treats a 3D image as a series of 2D slices and learns the 2D slices through a CNN model to segment the image [19]. Christ et al. serially connected two fully convolutional network (FCN) models, using the region of interest (ROI) of the first as the input of the second FCN, and segmented the liver and its lesions; the liver and lesion segmentation of the model had a Dice score greater than 94% [20]. Tomita et al. extracted the radiological features of each CT image using a deep CNN and integrated them into an evaluation system; the segmentation results had an accuracy of 89.2% [21]. Furthermore, Ronneberger et al. used a U-net model to achieve high-speed end-to-end training with limited images, which provided excellent segmentation results [22]. Long et al. established a "fully convolutional" network that accepts inputs of any size and produces outputs of corresponding size through effective inference and learning [23]. In another approach, 3D volume segmentation of medical images is performed directly by feeding 3D medical images into a 3D deep learning model [24,25,26,27]. For volumetric image segmentation, Çiçek et al. introduced a 3D U-net model, which learns from sparsely annotated volumetric images [28]. Milletari et al. proposed a 3D image segmentation method, V-net, based on a volumetric fully convolutional neural network, to achieve end-to-end training and learning, which enables prediction of the entire volume [29].
The implementation of the proposed MV-SIR model involves the following procedures: (1) Extract lung nodule cubes from the Lung Image Database Consortium (LIDC) and Image Database Resource Initiative (IDRI) (LIDC-IDRI) CT dataset and extract patches from the three views by taking a voxel point in the cube as the center. (2) Extract VH and SH features from the slices of lung nodules. (3) Build the SIR submodel and train it with the patches extracted from the three views. (4) Combine the six branches of the lung nodule into the MV-SIR model, obtain the training results, and perform 3D reconstruction on the image segmentation results.
Dataset and multi-view patch extraction
The LIDC-IDRI dataset was collected by the National Cancer Institute to study early cancer detection in high-risk populations. The LIDC-IDRI dataset is composed of chest medical image files (such as CT images and X-ray films) and the corresponding diagnosed lesions. A total of 1018 research samples are included in the dataset. For each of the images in the sample, two-stage diagnostic labeling was performed by four experienced chest radiologists. In the first stage, each radiologist independently diagnosed and labeled the patient's nodule locations, categorized as follows: (1) nodules ≥ 3 mm, (2) nodules < 3 mm, and (3) non-nodules ≥ 3 mm. In the second stage, each radiologist independently reviewed the annotations of the other three radiologists and then gave their own final marking results. The results of the four radiologists are recorded in LIDC-IDRI. In this paper, we use the average results of the four radiologists as the marked area of the lung nodules. Such a two-stage annotation marks all results as completely as possible while avoiding forced consensus. We selected a total of 874 clearly marked lung nodules; 600 lung nodules were used for model training and validation, of which the validation set accounted for 10%, and 274 lung nodules were used for model testing. All study samples were processed in the same way; we use LIDC-IDRI-0001 as an example, which is a matrix of 133 × 512 × 512, i.e., 133 slices of size 512 × 512 each. According to the spatial resolution of the chest CT scan, we resampled the pixel values into voxels with a standard size of 1.0 mm × 1.0 mm × 1.0 mm, and finally obtained a voxel cube of 133 × 512 × 512 mm3 to complete the 3D reconstruction of the CT images [30]. We extracted the nodules from the entire CT image based on the center position of the nodule and the ROI provided by the radiologists. We prepared a lung nodule cube consisting only of voxel grayscale values and added 10 voxels in each of the six directions of the cube, namely, the top, bottom, front, back, left, and right sides, to balance the classes between negative and positive samples. Although the obtained lung nodule cubes have different sizes, the extracted 2D patches have a uniform size, so this does not affect the training of our model. We extracted multi-view patches centered on a random voxel in the lung nodule cube from the axial, coronal, and sagittal views. Research indicates that the best patch size is 30 × 30 [21]. Figure 1 presents the voxel points randomly selected as centers and the 30 × 30 patches extracted around them in the axial, coronal, and sagittal views.
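The resampling step described above can be sketched as follows; this is a minimal illustration assuming the original voxel spacing is known from the scan metadata, and the function and variable names are ours, not the authors' code.

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_isotropic(volume, spacing, new_spacing=(1.0, 1.0, 1.0)):
    """Resample a CT volume (z, y, x) to ~1.0 mm isotropic voxels.

    `spacing` is the original voxel size in mm along (z, y, x), typically
    read from the DICOM metadata of the scan.
    """
    spacing = np.asarray(spacing, dtype=float)
    new_spacing = np.asarray(new_spacing, dtype=float)
    zoom_factors = spacing / new_spacing           # >1 upsamples, <1 downsamples
    return zoom(volume, zoom_factors, order=1)     # trilinear interpolation

# Example (illustrative spacing values):
# volume = load_ct_volume("LIDC-IDRI-0001")        # hypothetical loader, shape (133, 512, 512)
# iso = resample_to_isotropic(volume, spacing=(2.5, 0.7, 0.7))
```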
Structure of the MV-SIR model and the model training process
VH and SH extraction
As 3D image segmentation needs to be converted into 2D image classification, we must label the patches extracted from the lung nodule cubes as nodule or non-nodule before the model is used for classification training. Based on the ROI marked by the radiologists on the CT image, we obtain a polygon with the lung nodule boundaries. To judge whether a patch belongs to the lung nodule, we only need to determine whether the randomly located patch center is inside the polygon. This is done by the ray method: a ray is drawn from the center point, and the number of intersections the ray makes with the boundaries of the polygon is counted. If the number of intersection points is odd, the point is inside the polygon; otherwise, the point is outside the polygon. Patches that belong to a pulmonary nodule are labeled 1, while those that do not are labeled 0. On each slice, 4000 patches are extracted, yielding 4000 × m patches per lung nodule; the total number of patches is
$$ \text{Patches} = \sum_{i=1}^{n} 4000 \cdot m^{(i)}, $$
where m(i) is the number of slices of the i-th lung nodule and n is the total number of lung nodules extracted. In this way, we select between one quarter and one half of the voxels in the lung nodule cube, and the extracted 2D patches contain most of the information of the lung nodule.
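The ray method described above can be written as a short point-in-polygon test; the sketch below is a generic illustration (the variable names are ours) rather than the authors' implementation.

```python
def point_in_polygon(px, py, polygon):
    """Return True if point (px, py) lies inside `polygon` (ray-casting rule).

    `polygon` is an ordered list of (x, y) vertices of the radiologist ROI.
    A horizontal ray is cast to the right of the point; an odd number of
    edge crossings means the point is inside the contour.
    """
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > py) != (y2 > py):                       # edge straddles the ray
            x_cross = x1 + (py - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > px:                             # crossing lies to the right
                inside = not inside
    return inside

# Label a randomly chosen patch center (cx, cy):
# label = 1 if point_in_polygon(cx, cy, roi_vertices) else 0
```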
As Fig. 1 shows, the extraction of VH and SH features is based on the voxel values of the CT images and the ROI calibrated by the radiologists. VH is represented by the difference in the grayscale values of the voxels and can be directly obtained from the lung nodule cube. SH is represented by different shapes. We convert the voxel grayscale value image into a binary image based on the ROI, which better reflects its shape feature.
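A minimal sketch of how the SH (binary shape) image for one slice might be derived from the radiologist ROI; `skimage.draw.polygon` is assumed here for convenience and is not necessarily what the authors used.

```python
import numpy as np
from skimage.draw import polygon

def shape_heterogeneity_slice(ct_slice, roi_x, roi_y):
    """Build the SH image for one slice: 1 inside the nodule contour, 0 outside.

    The VH image is the grayscale CT slice itself; the SH image keeps only
    the nodule shape by filling the ROI polygon drawn by the radiologists.
    """
    sh = np.zeros(ct_slice.shape, dtype=np.uint8)
    rr, cc = polygon(roi_y, roi_x, shape=sh.shape)   # pixel indices inside the contour
    sh[rr, cc] = 1
    return sh
```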
SIR submodel
SIR submodels are composed of two residual blocks and one secondary input block, connected with several fully connected, pooling, and convolution layers. As shown in Fig. 2, the 30 × 30 image is first input into convolutional layer C1 and pooling layer P2. Convolutional layer C1 contains 32 kernels of size 3 × 3, and 30 × 30 × 32 feature maps are obtained. The feature maps are then input into P2 with 2 × 2 kernels and a stride of 2 × 2, and 15 × 15 × 32 feature maps are obtained.
Structure of the MV-SIR submodel, and the submodel training process
Next, in an identity residual block, the upper path is the "shortcut path" and the lower path is the "main path". The shortcut path bypasses the block, while the main path forms its main structure. The main path consists of a first convolution layer with a filter size of 1 × 1, a stride of 1 × 1, and padding = "valid" (no-fill convolution); a second convolution layer with a filter size of 3 × 3, a stride of 1 × 1, and padding = "same" (same convolution); and a third convolution layer with a filter size of 1 × 1, a stride of 1 × 1, and padding = "valid" (no-fill convolution). The shortcut path feeds the input directly to the module through the "Layeradd" function, after which the ReLU activation function is applied. The input and output of the identity residual block have the same dimensions, so a 15 × 15 × 32 feature map is obtained. The residual block protects information integrity by passing the input information directly to the output; the network only needs to learn the difference between the input and the output, which simplifies the learning objective and its complexity. This improves the efficiency of CNN learning; the specific improvement principle is analyzed in detail in the discussion.
The secondary input is made via another path. After the original image passes through a few convolution operations, it is stitched to the output of the first residual block using the concatenate function. Note that, as shown in Fig. 2, the "Layeradd" function differs from the "Layerconcatenate" function: the former adds the values of one matrix to those of another, so the resulting matrix dimensions are unchanged although its values change; the latter splices one matrix onto another, changing the dimensions of the matrix while keeping its values unchanged. The 30 × 30 image is input into convolutional layer EC1 and pooling layer EP2, and 15 × 15 × 32 secondary feature maps are obtained. After splicing the matrices with the "Layerconcatenate" function, we obtain a 15 × 15 × 64 feature map matrix as the input to the subsequent layer.
Next, the feature maps are input to pooling layer P3 and convolution layer C4. Because our image size is small, in order to better preserve the integrity of the image information, they are input again into an identity residual block followed by a pooling layer, and an 8 × 8 × 128 feature map matrix is obtained. Finally, the result is sequentially input to two 1 × 1 × 256 fully connected layers, F7 and F8. This completes the construction of our secondary input residual (SIR) submodel.
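To make the architecture concrete, below is a minimal Keras sketch of one SIR submodel following the layer sizes quoted above; the exact placement of the later pooling layer, the deeper filter counts, and the dense layer sizes are our assumptions where the text does not fully specify them, so this should be read as an illustration rather than the authors' code.

```python
from tensorflow.keras import layers, Model

def identity_residual_block(x, filters):
    """Bottleneck residual block (1x1 -> 3x3 -> 1x1) added back onto its input."""
    shortcut = x
    y = layers.Conv2D(filters, 1, padding="valid", activation="relu")(x)
    y = layers.Conv2D(filters, 3, padding="same", activation="relu")(y)
    y = layers.Conv2D(x.shape[-1], 1, padding="valid")(y)   # match input channel count
    y = layers.Add()([shortcut, y])                          # "Layeradd": shape kept, values change
    return layers.Activation("relu")(y)

def build_sir_submodel(input_shape=(30, 30, 1), name="sir_submodel"):
    inputs = layers.Input(shape=input_shape)

    # Main path: C1 (32 kernels of 3x3) and P2 (2x2, stride 2)
    x = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)  # 30x30x32
    x = layers.MaxPooling2D(2, strides=2)(x)                             # 15x15x32
    x = identity_residual_block(x, 32)                                   # 15x15x32

    # Secondary input path: the original patch through EC1 and EP2
    e = layers.Conv2D(32, 3, padding="same", activation="relu")(inputs)
    e = layers.MaxPooling2D(2, strides=2)(e)                             # 15x15x32

    # "Layerconcatenate": values kept, channel dimension doubles -> 15x15x64
    x = layers.Concatenate()([x, e])

    x = layers.MaxPooling2D(2, strides=2, padding="same")(x)             # P3 -> 8x8x64
    x = layers.Conv2D(128, 3, padding="same", activation="relu")(x)      # C4 -> 8x8x128
    x = identity_residual_block(x, 128)                                  # 8x8x128

    x = layers.Flatten()(x)
    x = layers.Dense(256, activation="relu")(x)                          # F7
    x = layers.Dense(256, activation="relu")(x)                          # F8
    return Model(inputs, x, name=name)
```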
MV-SIR model
As shown in Fig. 1, the MV-SIR model is composed of six SIR submodels. The submodel inputs are the VH and SH patches from the axial, coronal, and sagittal views. In each lung nodule cube, 6 × 4000 × m patches are input to the MV-SIR model. We extracted 600 pulmonary nodules for training and 274 lung nodules for testing. The total number of patches is given by:
$$ \text{All-Patches} = \sum_{i=1}^{674} 4000 \cdot 6 \cdot m^{(i)}, $$
A fully connected layer fuses all submodels and is connected to a classification layer consisting of a single neuron. Since the task is a two-class problem, we use the classical sigmoid function as the activation of the output neuron:
$$ \delta(z) = \frac{1}{1 + e^{-z}} \in [0, 1], \quad z \in (-\infty, +\infty), $$
where z is the output of the model. For the loss function, we select binary cross-entropy, which is appropriate for a two-class (0 or 1) problem and avoids the slow weight updates associated with the quadratic (variance) cost function [31]. The loss function L is given by the following formula:
$$ L = -\frac{1}{n}\sum_{i=1}^{n}\left[ y^{(i)}\log \hat{y}^{(i)} + \left(1 - y^{(i)}\right)\log\left(1 - \hat{y}^{(i)}\right)\right], $$
Here, y(i) is the true calibrated label and \( \hat{y}^{(i)} \) is the model prediction. We use the adaptive learning rate optimization method Adam to compute an adaptive learning rate for each parameter. In practice, Adam compares favorably with other adaptive learning methods: it is simple to implement, computationally efficient, and memory-light, its hyperparameters have intuitive interpretations and usually require no or only minor fine-tuning, and its parameter updates are invariant to rescaling of the gradient [32]. The learning rate and weight decay are 0.0001 and 0.01, respectively, and the batch size is 2000.
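As a hedged sketch of how the fusion and training configuration described above might look in Keras (the fusion layer width is our assumption, and weight decay is only noted in a comment because its handling depends on the optimizer and Keras version):

```python
from tensorflow.keras import layers, Model, optimizers

def build_mv_sir(submodels):
    """Fuse six SIR submodels (VH/SH x axial/coronal/sagittal) into one classifier."""
    merged = layers.Concatenate()([m.output for m in submodels])
    merged = layers.Dense(256, activation="relu")(merged)     # fully connected fusion layer
    output = layers.Dense(1, activation="sigmoid")(merged)    # single-neuron, two-class output
    model = Model([m.input for m in submodels], output, name="mv_sir")
    model.compile(optimizer=optimizers.Adam(learning_rate=1e-4),  # weight decay 0.01 would need
                  loss="binary_crossentropy",                     # AdamW or kernel regularizers,
                  metrics=["accuracy"])                           # depending on the version
    return model

# model = build_mv_sir([build_sir_submodel(name=f"sir_{i}") for i in range(6)])
# model.fit([vh_ax, sh_ax, vh_cor, sh_cor, vh_sag, sh_sag], labels,
#           batch_size=2000, epochs=100, validation_split=0.1)
```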
Figure 3 presents the 3D segmentation prediction of the MV-SIR model. For the lung nodule cubes of the test set, patches are prepared point by point and the position of each voxel is recorded. The model predicts whether each patch lies within a lung nodule, and the predicted values are rearranged according to the voxel positions. Subsequently, the prediction map is binarized with a threshold to obtain a mask of the segmented image; this mask is overlaid onto the original image to complete the 3D segmentation of the pulmonary nodule. In this way, our MV-SIR model can combine the VH and SH features of the medical images, their shallow, middle, and deep layer information, and the information of different views for a comprehensive judgment, thus effectively improving image recognition and segmentation and enhancing the 3D segmentation accuracy.
Graph of the prediction confidence matrix, where the arrow points to the predicted value of a single voxel point
In order to evaluate the proposed model, the results predicted by the model are compared with the ground truth in terms of the Dice coefficient, average surface distance (ASD), and Hausdorff distance (HSD). In addition, we measure the sensitivity (SEN) and positive predictive value (PPV) to assess the ability of the model to segment the ROI in the segmentation experiment. These metrics are calculated by the following formulas:
$$ \text{DICE} = \frac{2\left(V_{seg} \cap V_{gt}\right)}{V_{seg} + V_{gt}}, $$
$$ \text{PPV} = \frac{V\left(Gt \cap Seg\right)}{V\left(Seg\right)}, $$
$$ \text{SEN} = \frac{V\left(Gt \cap Seg\right)}{V\left(Gt\right)}, $$
$$ \text{HSD} = \max\left\{ \sup_{x \in X}\,\inf_{y \in Y} d(x, y),\; \sup_{y \in Y}\,\inf_{x \in X} d(x, y) \right\}, $$
$$ \text{ASD} = \frac{1}{2}\left( \operatorname{mean}_{i \in Gt}\,\min_{j \in Seg} d(i, j) + \operatorname{mean}_{i \in Seg}\,\min_{j \in Gt} d(i, j) \right), $$
Here, Vgt is the calibrated ground truth, Vseg is the model segmentation result, x and y are points of the two surfaces, \( \sup_{x \in X}\inf_{y \in Y} \) denotes the largest of the shortest distances from the points of one set to the other set, and \( \operatorname{mean}_{i \in Gt}\min_{j \in Seg} \) is the average of the closest distances between the two point sets.
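For reference, the overlap metrics above can be computed from binary masks as in the following minimal NumPy sketch (the surface-based HSD and ASD additionally require extracting surface voxels and nearest-neighbour distances and are omitted here):

```python
import numpy as np

def overlap_metrics(seg, gt):
    """Dice, positive predictive value and sensitivity for binary 3D masks."""
    seg = np.asarray(seg, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    intersection = np.logical_and(seg, gt).sum()
    dice = 2.0 * intersection / (seg.sum() + gt.sum())
    ppv = intersection / seg.sum()   # fraction of the segmentation that is true nodule
    sen = intersection / gt.sum()    # fraction of the ground truth that is recovered
    return dice, ppv, sen
```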
The software used for the implementation of the model is the Keras-gpu 2.2.4 platform developed by Google, and the hardware is a Dell workstation running Windows® 10, with an Intel(R) Xeon(R) Gold 6130 CPU @ 2.10 GHz (16 cores), 256 GB of RAM, and an NVIDIA Quadro P5000 GPU.
After 100 epochs of training, the MV-SIR model completely converged. The accuracy of the training set (ACC) and that of the verification set (Val_ACC) reached 99.10% and 98.91%, respectively. The loss of the training set (loss) and that of the verification set (Val_loss) decreased to 0.0321 and 0.0318, respectively.
Figure 4 indicates that the ACC values of the training set and the verification set increase rapidly during the first 30 epochs of training, after which the rate of increase slows; finally, after 100 epochs, ACC remains constant. The same trend is observed for the loss values. These observations indicate that our MV-SIR model fully converges after 100 epochs, and a high ACC is achieved. The MV-SIR model generally takes only 2 h to complete the training process, its best-performing training requires only 100 epochs, and the prediction process is completed within 5 min; thus, the segmentation efficiency of the model is improved.
Learning curve of the MV-SIR model
Comparison of model structures
We analyzed the effect of different model structures on the 3D segmentation performance. For this analysis, we designed three model structures: the traditional multi-view input CNN (MV-CNN) model, the multi-view input residual block CNN (MV-I-CNN) model, and our MV-SIR model. The results indicate that with the improvement of the model structure, the 3D segmentation performance improves.
Figure 5 presents the segmentation effect maps of the 2D slices obtained from the 3D segmentation results of the different models. Note that in the segmented lung nodule image predicted by the MV-CNN model, the internal pulmonary nodules are incomplete and the external image exceeds the lung nodule boundary, indicating the low accuracy of the model prediction and the presence of many false negatives and false positives. The MV-I-CNN model performs better, but there is still a certain number of false positives. By contrast, our model achieves a satisfactory 3D segmentation effect. Table 1 presents the comparison of the 3D segmentation performances of our model and the other models in terms of the metrics Dice, ASD, HSD, PPV, ACC, and SEN. The values indicate good performance of the MV-SIR model in terms of Dice and SEN compared to the other two models. In particular, the Dice value is 0.926, approaching the current high level in 3D medical image segmentation and consistent with the result of 3D U-net medical image segmentation [28]; the other parameters follow a similar trend. In summary, we can conclude that the secondary input of the original image and the residual blocks positively contribute to the improvement of model segmentation performance.
Comparison of model structures and different inputs. Columns from left to right are the original CT image, radiologist marker image, MV-SIR result, and 2D segmentation effect map. The top to bottom rows sequentially present the results of the MV-CNN, MV-I-CNN, the MV-SIR with VH input, the MV-SIR with SH input, the MV-SIR with combined VH and SH input, and secondary input MV-SIR. The 2D segmentation map can intuitively show that our model has achieved the best segmentation effect
Table 1 Comparison of the 3D segmentation performances of different model structures
Comparison of different inputs
Figure 5 also presents the comparison of the 3D segmentation performances of the MV-SIR model with different inputs, that is, VH features alone, SH features alone, VH and SH combined, and VH and SH combined along with the secondary input. It is noted that VH and SH together as input effectively improve the performance of medical image segmentation. Nevertheless, our secondary input model performs the best, indicating that the secondary input can significantly improve the 3D segmentation effect of the model.
In general, when the VH or SH features alone are input in the MV-SIR model, the image segmentation effect map in the 2D slice from the 3D segmentation result is incomplete, and the non-pulmonary nodules are identified as lung nodules. However, with VH and SH together as the input to the MV-SIR model, the apparent segmentation effect is considerably improved, with decreased false negatives and false positives of the predicted results. Moreover, the segmentation performance is the best in case of VH and SH together as the input along with the secondary input to the MV-SIR model.
Table 2 indicates that the main disadvantage of using the VH or SH features alone as the MV-SIR model input is that more false positives appear in the prediction results. From Table 2, we can draw the following conclusions in terms of the Dice, ASD, HSD, PPV, and SEN indicators: the MV-SIR model performs the best in 3D segmentation, and the comparison of different inputs shows that the secondary input improves the accuracy of the model's 3D segmentation.
Table 2 Comparison of the 3D segmentation performances of the model with different inputs
Receiver operating characteristic curve (ROC) and model performance
To further confirm the effectiveness of our MV-SIR model in improving the 3D segmentation performance, we draw ROC curves for the models with different inputs and different structures. As mentioned above, in the process of model prediction and 3D reconstruction of the segmentation, we need to choose an optimal threshold for binarization of the reconstructed images. The ROC curve is a powerful tool to study the generalization performance of deep learning models from the perspective of threshold selection: the threshold of the point closest to the upper left corner is the optimal threshold. The ROC curves of all models are plotted on the same axes to visually compare their merits; the model whose ROC curve lies nearest the upper left corner has the highest accuracy.
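The threshold selection rule described here (the ROC point closest to the upper-left corner) can be implemented as below; scikit-learn's `roc_curve` is used purely for illustration and the variable names are placeholders:

```python
import numpy as np
from sklearn.metrics import roc_curve

def optimal_threshold(y_true, y_score):
    """Return the probability threshold whose ROC point is closest to (0, 1)."""
    fpr, tpr, thresholds = roc_curve(y_true, y_score)
    distance = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)   # distance to the upper-left corner
    return thresholds[np.argmin(distance)]

# Binarize the reconstructed prediction volume with the selected threshold:
# mask = (predicted_volume >= optimal_threshold(val_labels, val_scores)).astype(np.uint8)
```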
Figures 6 and 7 present the seven ROC curves drawn in two graphs, with the false positive rate as the abscissa and the true positive rate as the ordinate. The figures indicate that the MV-SIR model performs better in 3D image segmentation than MV-CNN and MV-I-CNN. The optimal threshold of the MV-SIR model is smaller than those of the other two models, which shows that the predictions obtained by the MV-SIR model have relatively high confidence. Figure 7 confirms the same conclusion, that the MV-SIR model performs the best in medical image segmentation, when considering the four different input variants; in addition, the confidence of the prediction results obtained by the model is the highest.
ROC curves of MV-CNN, MV-I-CNN, and MV-SIR
ROC curves of MV-SIR with different inputs, namely, VH, SH, VH and SH together, and the secondary input MV-SIR
Table 3 presents the results of the comparison of our model with other models. Our model achieves better performance in terms of Dice, SEN, PPV, HSD, and ASD. The Dice value of our model is comparable to that of the classic 3D U-net model, while other parameter values somewhat exceed the values of the U-net model.
Table 3 Comparison of the 3D segmentation performances of our model and other models
Figure 8 presents the results of 3D reconstruction of the original CT image, the GT map of the expert calibration, and the prediction map of our model. The 3D segmentation predicted by our model is very close to the GT map of the expert calibration, which intuitively implies that our model has achieved superior results in 3D segmentation of pulmonary nodules.
MV-SIR model 3D segmentation result. Top left: the lung nodule original image; top right: ground truth (GT) map of expert calibration; and bottom: the MV-SIR prediction map
We used the QIN LUNG CT public dataset to test our model. The computed tomography (CT) image data in this dataset come from patients diagnosed with non-small cell lung cancer (NSCLC). Our model achieved an average Dice of 0.920 ± 0.027 over 47 cases on this dataset, which shows that our model produces good segmentation results for lung nodules across different datasets.
3D medical image segmentation has always been a challenging task. Our goal is to improve the accuracy and confidence of 3D medical image segmentation to assist physicians in clinical diagnosis and treatment. In this study, we proposed the MV-SIR model to improve the performance of medical image 3D segmentation.
By presenting the MV-SIR model with scale patches extracted around a particular voxel, the CNN can simply be used to classify each voxel in the image [4]. We extracted the characteristic patches from three perspectives, namely, the axial, coronal, and sagittal views, in the lung nodule cube. Multi-view patches help capture more of the surrounding anatomy and extend the effective field of view [36].
For each patch, we further extracted the VH and SH features. As shown in Fig. 1, VH predominantly conveys grayscale value information, whereas SH predominantly conveys boundary information when they are used separately as input to the model. By combining them as the input, the model learns more information from each patch and thus gains a wider sense of vision. To validate this concept, we compared the performance of the MV-SIR model under different inputs, namely VH, SH, and VH and SH together. We found that VH and SH together as the input to the model yield greatly improved Dice, HSD, SEN, and other parameter values. In addition, the ROC curve clearly indicates the superior 3D segmentation result obtained by the model with the combined VH and SH input.
We believe that multi-view and VH and SH feature maps together as the input yield improved 3D segmentation performance of the model mainly because the model can extract more information of the image, fields of view, and boundaries as well as pixel values, producing excellent mutual effect. This conclusion is consistent with the previous studies [37,38,39].
Different network depths allow features of different levels to be extracted [40, 41]. The more layers the network has, the deeper the feature information extracted from the image. In the proposed model, we add a residual block to the traditional CNN: we skip the three-layer convolution by connecting the input information of P2 and compute the output of the residual block as F(x) + x. The two matrices are added with the dimensions of the matrix unchanged; this means that the feature information of the first two layers is added directly to the subsequent output, and the values of the matrix change. Characteristic information of different levels can thus be extracted to a certain extent. We add another residual block to the network and input the image a second time, but along another path. Then, after only one convolution and one pooling operation, the obtained feature matrix is spliced onto the output of the first residual block. Note that we use a matrix splicing function, in which the values of the matrix remain unchanged while the size of the matrix doubles.
Different characteristics of different network layers can be obtained at the final fully connected layer. The shallow information and the deep information are used together as the basis for judgment in our 3D image segmentation, thus improving the performance of 3D segmentation by our model.
Further, we designed three network structures: MV-CNN, MV-I-CNN, and MV-SIR. Segmentation indicators such as Dice, HSD, and SEN, as well as the segmentation renderings, all indicate that the segmentation performance of the proposed model is superior, in turn confirming the validity of the concepts on which the model was designed. Moreover, the ROC curves obtained for our model and the previous models, as well as for our model with different inputs, confirm that the MV-SIR model achieves superior performance in 3D medical image segmentation. In future work, we hope to design a network model with multiple iterations to further validate our concepts.
One challenge is how to apply our model to real-world CT images. We hope to expand the cube containing lung nodules to the whole 3D volume; in the training process, a larger amount of computation would then be required to complete automatic 3D segmentation of the whole volume. Another feasible solution is for the doctor to calibrate the position of the lung nodule cube to assist our model in the 3D segmentation.
In this study, we provide a well-structured deep learning model, MV-SIR, for 3D segmentation of pulmonary nodules. Our model consists of six SIR submodels, each of which adds two residual blocks and one secondary input module to a traditional CNN. From the LIDC-IDRI dataset, 19 million patches were extracted from the 600 lung nodules used for model training and the 274 lung nodules used for model testing. The test results indicate that the MV-SIR model achieves excellent performance in 3D pulmonary nodule segmentation, with a Dice of 0.926 and an ASD of 0.072. In future work, we plan to include more repeated inputs in the model and to test the segmentation performance of the MV-SIR model on additional datasets.
The dataset(s) supporting the conclusions of this article is (are) available in the public Research Data Deposit platform (https://www.cancerimagingarchive.net/).
CNNs:
Convolutional neural networks
MV-SIR:
Multi-view secondary input residual
LIDC-IDRI:
Lung image database consortium and image database resource initiative
VH:
Voxel heterogeneity
SH:
Shape heterogeneity
SIR:
Secondary input residual
FCN:
Fully convolutional network
ROI:
Region of interest
ASD:
Average surface distance
HSD:
Hausdorff distance
SEN:
Sensitivity
PPV:
Positive predictive value
ACC:
Accuracy of the training set
Val_ACC:
Accuracy of the verification set
loss:
The loss of the training set
Val_loss:
The loss of the verification set
MV-CNN:
The traditional multi-view input CNN
MV-I-CNN:
The multi-view input residual block CNN
ROC:
Receiver operating characteristic curve
GT:
Ground truth
Milroy MJ. Cancer Statistics: Global and National 2018; https://doi.org/10.1007/978-3-319-78649-0:29-35.
Heimann T, Meinzer HP. Statistical shape models for 3D medical image segmentation: a review. Med Image Anal. 2009;13:543–63.
Kleesiek J, Urban G, Hubert A, et al. Deep MRI brain extraction: a 3D convolutional neural network for skull stripping. Neuroimage. 2016;129:460–9.
Litjens G, Kooi T, Bejnordi BE, et al. A survey on deep learning in medical image analysis. Med Image Anal. 2017;42:60–88.
Dou Q, Yu L, Chen H, et al. 3D deeply supervised network for automated segmentation of volumetric medical images. Medical image analysis 2017;41:40-54.
Shen D, Wu G, Suk HI. Deep learning in medical image analysis. Annu Rev Biomed Eng. 2017;19:221–48.
Zheng Y, Liu D, Georgescu B, et al. 3D deep learning for efficient and robust landmark detection in volumetric data. International Conference on Medical Image Computing and Computer-Assisted Intervention: Springer, 2015; pp. 565-572. https://doi.org/10.1007/978-3-319-24553-9_69.
Meyer P, Noblet V, Mazzara C, et al. Survey on deep learning for radiotherapy. Computers in biology and medicine 2018;98:126-46.
Han F, Zhang G, Wang H, et al. A texture feature analysis for diagnosis of pulmonary nodules using LIDC-IDRI database. 2013 IEEE International Conference on Medical Imaging Physics and Engineering: IEEE, 2013; pp. 14-18. https://doi.org/10.1109/ICMIPE.2013.6864494.
Deng L, Yu D. Deep learning: methods and applications. Foundations and trends in signal processing 2014;7:197-387.
Jiang Z, Liu Y, Chen H, et al. Optimization of Process Parameters for Biological 3D Printing Forming Based on BP Neural Network and Genetic Algorithm. ISPE CE, 2014; pp. 351-358. https://doi.org/10.1016/j.triboint.2008.06.002.
Vazquez-Reina A, Gelbart M, Huang D, et al. Segmentation fusion for connectomics. 2011 International Conference on Computer Vision: IEEE, 2011; pp. 177-184. https://doi.org/10.1109/ICCV.2011.6126240.
Srivastava N, Salakhutdinov RR. Multimodal learning with deep boltzmann machines. Advances in neural information processing systems, 2012; pp.2222-2230. https://doi.org/10.1162/NECO_a_00311.
Havaei M, Davy A, Warde-Farley D, et al. Brain tumor segmentation with deep neural networks. Medical image analysis. 2017;35:18-31.
Moeskops P, Viergever MA, Mendrik AM, et al. Automatic segmentation of MR brain images with a convolutional neural network. IEEE Trans Med Imaging. 2016;35:1252–61.
Roth HR, Lu L, Farag A, et al. Deeporgan: Multi-level deep convolutional networks for automated pancreas segmentation. International conference on medical image computing and computer-assisted intervention: Springer, 2015; pp. 556-564. https://doi.org/10.1007/978-3-319-24553-9_68.
Wang S, Zhou M, Gevaert O, et al. A multi-view deep convolutional neural networks for lung nodule segmentation. 2017 39th Annual International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC): IEEE, 2017; pp. 1752-1755. https://doi.org/10.1109/EMBC.2017.8037182.
Xie Y, Xia Y, Zhang J, et al. Knowledge-based collaborative deep learning for benign-malignant lung nodule classification on chest CT. IEEE Trans Med Imaging. 2019;38:1.
Brosch T, Yoo Y, Tang LY, et al. Deep convolutional encoder networks for multiple sclerosis lesion segmentation. International Conference on Medical Image Computing and Computer-Assisted Intervention: Springer, 2015; pp. 3-11. https://doi.org/10.1007/978-3-319-24574-4_1.
Christ PF, Ettlinger F, Grün F, et al. Automatic liver and tumor segmentation of CT and MRI volumes using cascaded fully convolutional neural networks. arXiv preprint arXiv:170205970. 2017.
Tomita N, Cheung YY, Hassanpour S. Deep neural networks for automatic detection of osteoporotic vertebral fractures on CT scans. Computers in biology and medicine 2018;98:8-15.
Ronneberger O, Fischer P, Brox T. U-net: Convolutional networks for biomedical image segmentation. International Conference on Medical image computing and computer-assisted intervention: Springer, 2015; pp. 234-241. https://doi.org/10.1007/978-3-319-24574-4_28.
Long J, Shelhamer E, Darrell T. Fully convolutional networks for semantic segmentation. IEEE Trans Pattern Anal Mach Intell. 2014;39:640–51.
Fan L, Xia Z, Zhang X, et al. Lung nodule detection based on 3D convolutional neural networks. 2017 International Conference on the Frontiers and Advances in Data Science (FADS): IEEE, 2017; pp. 7-10. https://doi.org/10.1109/FADS.2017.8253184.
Zhao C, Han J, Jia Y, et al. Lung nodule detection via 3D U-Net and contextual convolutional neural network. 2018 International Conference on Networking and Network Applications (NaNA): IEEE, 2018; pp. 356-361.
Kaul C, Manandhar S, Pears N. Focusnet: An attention-based fully convolutional network for medical image segmentation. 2019 IEEE 16th International Symposium on Biomedical Imaging (ISBI 2019): IEEE, 2019; pp. 455-458. https://doi.org/10.1109/ISBI.2019.8759477.
Roth HR, Oda H, Zhou X, et al. An application of cascaded 3D fully convolutional networks for medical image segmentation. Computerized Medical Imaging and Graphics. 2018;66:90-9.
Çiçek Ö, Abdulkadir A, Lienkamp SS, et al. 3D U-Net: learning dense volumetric segmentation from sparse annotation. International conference on medical image computing and computer-assisted intervention: Springer, 2016; pp. 424-432. https://doi.org/10.1007/978-3-319-46723-8_49.
Milletari F, Navab N, Ahmadi S-A. V-net: Fully convolutional neural networks for volumetric medical image segmentation. 2016 fourth international conference on 3D vision (3DV): IEEE, 2016; pp. 565-571. https://doi.org/10.1109/3DV.2016.79.
Shen W, Zhou M, Yang F, et al. Multi-crop convolutional neural networks for lung nodule malignancy suspiciousness classification. Pattern Recogn. 2017;61:663–73.
Franke J, Härdle WK, Hafner CM. ARIMA time series models. Statistics of Financial Markets: Springer, 2015; pp. 237-261. https://doi.org/10.1007/978-3-642-54539-9_12.
Heaton J. Ian Goodfellow, Yoshua Bengio, and Aaron Courville: deep learning. Genet Program Evolvable Mach. 2017;19:1–3.
Shahzad R, Gao S, Tao Q, et al. Automated cardiovascular segmentation in patients with congenital heart disease from 3d cmr scans: combining multi-atlases and level-sets. Reconstruction, segmentation, and analysis of medical images: Springer, 2016; pp. 147-155. https://doi.org/10.1007/978-3-319-52280-7_15.
Tziritas G. Fully-automatic segmentation of cardiac images using 3-d mrf model optimization and substructures tracking. Reconstruction, Segmentation, and Analysis of Medical Images: Springer, 2016; pp. 129-136. https://doi.org/10.1007/978-3-319-52280-7_13.
Zeng G, Zheng G. Holistic decomposition convolution for effective semantic segmentation of 3D MR images. arXiv preprint arXiv:181209834. 2018.
Rajpoot K, Grau V, Noble JA, et al. The evaluation of single-view and multi-view fusion 3D echocardiography using image-driven segmentation and tracking. Med Image Anal. 2011;15:514–28.
Xie Y, Yong X, Zhang J, et al. Transferable multi-model Ensemble for Benign-Malignant Lung Nodule Classification on chest CT. Lect Notes Comput Sci. 2017;10435:656–64.
Wei J, Xia Y, Zhang Y. M3Net: A multi-model, multi-size, and multi-view deep neural network for brain magnetic resonance image segmentation. Pattern Recognition. 2019;91:366-78.
Wei S, Mu Z, Feng Y, et al. Multi-scale convolutional neural networks for lung nodule classification. Inf Process Med Imaging. 2015;24:588–99.
Szegedy C, Ioffe S, Vanhoucke V, et al. Inception-v4, inception-resnet and the impact of residual connections on learning. Thirty-first AAAI conference on artificial intelligence. 2017.
Chen C, Qi F. Single image super-resolution using deep CNN with dense skip connections and inception-resnet. 2018 9th International Conference on Information Technology in Medicine and Education (ITME): IEEE, 2018; pp. 999-1003.
This work was supported by the Educational Commission of Hebei Province of China under Grant No. NQ2018066, Hebei Province introduced foreign intelligence projects, High level talent research start-up fund of Chengde Medical University under Grant No. 201802, and the Guangdong Basic and Applied Basic Research Foundation under Grant No. 2019A1515011104.
Present Address: Department of Biomedical Engineering, Chengde Medical University, Chengde City, Hebei Province, China
Xianling Dong, Shiqi Xu, Yanli Liu, Li Li & Xiaolei Zhang
Department of Nuclear Medicine, Affiliated Hospital, Chengde Medical University, Chengde City, China
Aihui Wang
Faculty of Engineering, Universiti Putra Malaysia, Serdang, Malaysia
M. Iqbal Saripan
School of Biomedical Engineering and Guangdong Provincal Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, China
Lijun Lu
Xianling Dong
Shiqi Xu
Yanli Liu
Xiaolei Zhang
XD, SX: designed the study; YL, AW: literature screening; MS, LL: writing and revision; XZ, LL: programming. All authors read and approved the final manuscript.
Correspondence to Xiaolei Zhang or Lijun Lu.
All authors agreed to the publication.
Dong, X., Xu, S., Liu, Y. et al. Multi-view secondary input collaborative deep learning for lung nodule 3D segmentation. Cancer Imaging 20, 53 (2020). https://doi.org/10.1186/s40644-020-00331-0
Multi-view
Medical image
Three-dimensional segmentation
Secondary input
Residual block
These problems can be solved using the main ideas of Chapters 16 through 18. The Basics set will remind you of the fundamental concepts and some typical calculations. The rest are for you to further develop your problem solving skills and your fluency with the notation and ideas.
You can leave answers in terms of the standard normal cdf $\Phi$. You can also leave answers in terms of the Gamma function $\Gamma$ unless you are asked for a simplification.
1. Let $X$ have density $f_X(x) = 2x$ for $0 < x < 1$. Find the density of
(a) $5X - 3$
(b) $4X^3$
(c) $X/(1+X)$
2. Let $X$ have density $f(x) = 2x^{-3}e^{-x^{-2}}$ on the positive real numbers. Find the density of $X^4$.
3. For a fixed $\alpha > 0$ let $X$ have the Pareto density given by
$$ f(x) ~ = ~ \frac{\alpha}{x^{\alpha+1}}, ~~ x > 1 $$
Find the density of $\log(X)$. Recognize this as one of the famous ones and provide its name and parameters.
4. A Prob 140 student comes to lecture at a time that is uniformly distributed between 5:09 and 5:14. Independently of the student, the professor begins the lecture at a time that is uniformly distributed between 5:10 and 5:12. What is the chance that the lecture has already begun when the student arrives?
5. For some constant $c$ let $X$ and $Y$ have a joint density given by
$$ f(x, y) ~ = ~ \begin{cases} c(x - y), ~~ 0 < y < x < 1 \\ 0 ~~~~~~~~~~~~~~ \text{otherwise} \end{cases} $$
(a) Draw the region over which $f$ is positive.
(b) Find $c$.
(c) Find $P(X > Y + 0.4)$. Before you calculate, shade the event on the diagram you drew in (a).
(d) Find the density of $X$.
(e) Are $X$ and $Y$ independent?
(f) Find $E(XY)$.
6. Let $X$ and $Y$ have joint density $f$ given by
$$ f(x, y) ~ = ~ \begin{cases} \frac{40}{243}x(3-x)(x-y), ~~~~ 0 < y < x < 3 \\ 0 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ \text{otherwise} \end{cases} $$
Write each of the following in terms of $f$ but do not simplify the expression. Integrals should not include regions where $f(x, y) = 0$.
(a) $P(Y > 1)$
(b) the conditional density of $X$ given $Y = 1$ (please be clear about the values on which the density is positive)
(c) $E(e^{XY})$
7. Let $S$, $T$, and $R$ be independent standard normal variables. Find the following without using calculus.
(a) the distribution of $5S - 7$
(b) the distribution of $5(S-T) - 7$
(c) $P(S + T > R + 1)$
(d) $E(5S - 6T + 8R - 9)$
(e) $Var(5S - 6T + 8R - 9)$
8. Heights of women in a large population are normally distributed with mean 65 inches and SD 3.5 inches. Two women are picked at random. Find the chance that one of the women is more than an inch taller than the other.
9. Let $X_i$, $1 \le i \le 5$ be i.i.d. exponential $(\lambda)$ lifetimes of electrical components.
(a) Assume that as soon as one component dies it is replaced by another, so that $T = \sum_{i=1}^5 X_i$ is the total lifetime of all five components. Find the distribution, mean, and SD of $T$.
(b) Find the distribution, mean, and SD of the shortest of the five lifetimes.
10. Let $Z$ be standard normal and let $X$ have the gamma $(r, \lambda)$ density. You shouldn't have to do any calculation in this exercise if you've done the reading or remember what was covered in lecture.
(a) For $c > 0$, $cX$ has the $\underline{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}$ distribution.
(b) Fill in the blanks: $Z^2$ has the $\underline{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}$ distribution.
(c) Let $W$ have the normal $(0, \sigma^2)$ density. Fill in the blank: $W \stackrel{d}{=} \underline{~~~~~~~~~}Z$.
(d) Fill in the blank: $W^2$ has the $\underline{~~~~~~~~~~~~~~~~~~~~~~~~~~~~~}$ distribution.
11. Let $X$ be a random variable. Find the density of $X^2$ if $X$ has the uniform distribution on
(a) $(0, 1)$
(b) $(-1, 1)$
(c) $(-1, 2)$
12. Suppose two shots are fired at a target. Assume each shot hits with independent normally distributed coordinates, with the same means and equal unit variances. Let $D$ be the distance between the point where the two shots strike.
(a) Find the distribution of $D^2$.
(b) Find the distribution of $D$.
13. Let $Z$ be standard normal.
(a) Use the change of variable formula to find the density of $1/Z$. Why do you not have to worry about the event $Z = 0$?
(b) A student who doesn't like the change of variable formula decides to first find the cdf of $1/Z$ and then differentiate it to get the density. That's a fine plan. The student starts out by writing $P(1/Z < x) = P(1/x < Z)$ and immediately the course staff say, "Are you sure?" What is the problem with what the student wrote?
(c) For all $x$, find $P(1/Z < x)$.
(d) Check by differentiation that your answer to (c) is consistent with your answer to (a).
14. The joint density of $X$ and $Y$ is
$$ f(x, y) ~ = ~ \begin{cases} 24xy, ~~~ x, y > 0 \text{ and } 0 < x+y < 1 \\ 0 ~~~~~~~~~~~~~ \text{otherwise} \end{cases} $$
(a) Find the density of $X$. Recognize this as one of the famous ones and state its name and parameters.
(b) Without further calculation, find the density of $Y$ and justify your answer.
(c) Are $X$ and $Y$ independent? Why or why not?
(d) Find the conditional density of $X$ given $Y = 0.75$. As always, start with the possible values.
(e) Find $P(X > 0.2 \mid Y = 0.75)$.
(f) Find $E(X \mid Y = 0.75)$.
15. Weights in a population are normally distributed with mean 150 pounds. Just about 5% of the people weigh more than 185 pounds.
(a) Find the SD of the weights. You can leave your answer in terms of the standard normal cdf $\Phi$ if you don't have access to a notebook cell in which to do the numerical calculation.
(b) Two people are drawn at random. Find the chance that their weights are within 10 pounds of each other. If your answer to (a) isn't numerical, you can just call it $\sigma$ for brevity.
16. [This is Pitman 5.rev.22, with his permission]. In Maxwell's model of a gas, molecules of mass $m$ are assumed to have velocity components $V_x$, $V_y,$ and $V_z$ that are independent, with a joint distribution that is invariant under rotation of the three-dimensional coordinate system. Maxwell showed that $V_x$, $V_y$, and $V_z$ must each have the normal $(0, \sigma^2)$ distribution for some $\sigma$. Taking this result for granted:
(a) Find a formula for the density of the kinetic energy
$$ K ~ = ~ \frac{1}{2}mV_x^2 + \frac{1}{2}mV_y^2 + \frac{1}{2}mV_z^2 $$
(b) Find the mean and mode of the energy distribution.
17. Let $U_i$, $1 \le i \le 20$ be i.i.d. uniform $(0, 1)$ variables, and let $U_{(k)}$ be the $k$th order statistic.
(a) What is the density of $U_{(7)}$?
(b) Without integrating the density, find the cdf of $U_{(7)}$.
[Use the idea you used to find the tail probabilities of a sample median (in lab). Draw a line representing the unit interval and put down crosses representing the variables. For $U_{(7)}$ to be less than $x$, how must you distribute the crosses?]
(c) Find the joint density of $U_{(7)}$ and $U_{(12)}$.
18. Annual household incomes in City A have a mean of 68,000 dollars and an SD of 40,000 dollars. Annual household incomes in City B have a mean of 75,000 dollars and an SD of 50,000 dollars.
A random sample of 400 households is taken in City A. Independently, a random sample of 625 households is taken in City B. You can assume that the sampling procedures can be well approximated by sampling at random with replacement.
(a) Is the distribution of annual household incomes in City A approximately normal? Why or why not? What about City B?
(b) Find or approximate the distribution of the average annual household income in the sample from City A. Justify your answer.
(c) Find or approximate the chance that the average annual household income is greater in the City B sample than in the City A sample.
(d) About how large must a sample from City A be so that there is about a 90% chance that the sample average annual household income is within 1000 dollars of the population average?
19. A random variable $X$ has the beta $(2, 2)$ density. Given $X = x$, the conditional distribution of the random variable $Y$ is uniform on the interval $(-x, x)$.
(a) Find $P(Y < 0.2 \mid X = 0.6)$.
(b) Find $E(Y)$.
(c) Find the joint density of $X$ and $Y$. Remember to specify the region where it is positive.
(d) Find $P(X < 0.3, \vert Y \vert < 0.3X)$.
20. As you know, the Gamma function is defined as an integral. It turns out that its numerical values are hard to calculate exactly except in some cases. Here are two of those cases.
(a) If $n$ is an integer, what is the value of $\Gamma(n)$?
(b) Let $Z$ be standard normal. Use the density of $Z^2$ to show that $\Gamma(\frac{1}{2}) = \sqrt{\pi}$.
(c) For odd integer $n$, find $\Gamma(\frac{n}{2})$. [For even $n$, $n/2$ is an integer and you already dealt with that case in Part (a).]
21. Let $X$ and $Y$ be i.i.d. with a joint density.
(a) Find $P(Y > X)$.
(b) Find $P(\vert Y \vert > \vert X \vert)$.
(c) If $X$ and $Y$ are i.i.d. standard normal, find $P(Y > \vert X \vert)$.
22. Two points are placed independently and uniformly at random on the unit interval. This creates three segments of the interval. What is the chance that the three segments can form a triangle?
[To find probabilities of events determined by two independent uniform $(0,1)$ random variables, it's a good idea to draw the unit square.]
23. Let $X_i$, $i = 1,2$ be independent random variables such that $X_i$ has the exponential $(\lambda_i)$ distribution. Suppose $\lambda_1 \ne \lambda_2$.
(a) Use the convolution formula to find the density of the sum $S = X_1 + X_2$.
(b) Show by algebra that the expression you got for the density in Part (a) is non-negative for all positive $\lambda_1 \ne \lambda_2$.
(c) For $i=1, 2$, let $f_i$ denote the exponential density of $X_i$. Show that the density you got in part (a) is equal to $c_1f_1 + c_2f_2$ for two constants $c_1$ and $c_2$ such that $c_1+c_2 = 1$. Are $c_1$ and $c_2$ both positive?
Science – Society – Technology
New results on the gravity monitoring (2014–2017) of Soultz-sous-Forêts and Rittershoffen geothermal sites (France)
Nolwenn Portier ORCID: orcid.org/0000-0003-2107-81841,
Jacques Hinderer1,
Umberto Riccardi2,
Gilbert Ferhat1,3,
Marta Calvo4,
Yassine Abdelfettah1 &
Jean-Daniel Bernard1
Geothermal Energy volume 6, Article number: 19 (2018) Cite this article
This article presents the study of the mass redistribution associated with the geothermal energy exploitation of the Soultz-sous-Forêts and Rittershoffen plants by microgravity monitoring in the period 2014–2017. The two plants are located in the eastern part of France in the Rhine Graben. This rift is characterized by thermal anomalies. The Soultz-sous-Forêts enhanced geothermal system is a demonstration site producing 1.7 MWe thanks to three wells 5 km deep. The Rittershoffen geothermal plant is used to produce heat (24 MWth) with two wells of 2 km depth. The most recent production episodes at Soultz-sous-Forêts and Rittershoffen began on 24 June 2016 and 19 May 2016, respectively. Each summer, since 2014 for the Soultz-sous-Forêts network and since 2015 for the Rittershoffen network, gravity measurements have been taken with a Scintrex CG5 gravimeter in order to calculate the gravity variation compared to a reference station and a reference time. The stability of the reference station at the Soultz-sous-Forêts plant was investigated by repeated absolute gravity measurements from the FG5#206. Gravity ties with the gravity observatory of Strasbourg were also performed to compensate for the absence of a superconducting gravimeter at the in situ reference station. Precise leveling was undertaken simultaneously with each gravity survey, showing that vertical ground displacement is lower than 1 cm; hence, we consider that the detected gravity changes are due only to Newtonian attraction. We do not detect any signal at the Rittershoffen network in the investigated period. After the beginning of production, we noticed a small differential signal at the Soultz-sous-Forêts network, which is spatially associated with the injection and production wells' positions. Furthermore, the maximum gravity value appears in the same area as the induced seismicity related to the preferential paths of the geothermal fluid. However, a simple model based on a geothermal reservoir of cylindrical shape cannot explain the observations in terms of amplitude.
The Soultz-sous-Forêts and Rittershoffen geothermal sites are located in the eastern part of France in the Upper Rhine Graben (Meyer and Foulger 2007). At these geothermal plants, temperature anomalies at the top of the basement (see Fig. 1) reach around 85 °C (Baillieux et al. 2014).
Temperature anomaly in °C at top basement which is around 1400 m MD (Measured Depth) in Soultz-sous-Forêts and around 2200 m MD in Rittershoffen (Aichholzer et al. 2015). Fault traces with their dip directions and boreholes represented by triangles are also shown (From Baillieux et al. 2014)
The Soultz-sous-Forêts research project started in 1987 to extract geothermal energy from Hot Dry Rocks (HDR). Nevertheless, a large volume of brine was discovered and the Soultz-sous-Forêts geothermal site became the first Enhanced Geothermal System (EGS) demonstration site producing electricity in France (Dezayes et al. 2005). The basic concept of an EGS (Lu 2018) is to extract heat from HDR volumes even where the permeability is naturally low but can be improved by means of hydraulic and/or chemical stimulations. Two or more wells are drilled into the HDR and operate as production and injection wells. Heat is exploited using water as the working fluid as follows: the HDR is stimulated by the injected fluids to create fractures; the fluids run through the permeable pathways made in the rock, collecting heat, which is then extracted via the production wells.
The wells (see Fig. 2) are aligned in the direction of the regional maximum horizontal stress, N170°E (Valley and Evans 2007). GPK2 is the production well and GPK3 is the injection well; note that GPK4 was also used as an injection well from the 24th of January to the 30th of June 2017. The well depths are almost 5 km, with 500 m long open holes (Genter et al. 2010). A temperature of 165 °C allows producing 1.7 megawatts of electrical output (MWe). The most recent geothermal production began on the 23rd of June 2016 after a period of maintenance.
The Soultz-sous-Forêts (yellow circles) and Rittershoffen (blue circles) gravity networks. The station codes are written in red. In particular, the stations 50 and 23 (red circles) are, respectively, the reference station of the Soultz-sous-Forêts and Rittershoffen networks. The two continuous GNSS stations are indicated in green. The projection of GRT1, GPK3, GPK4 injection wells and GRT2 and GPK2 production wells on the surface is represented by a blue line. The red lines are their open-hole trajectories. STJ9 in the inset indicates the position of the gravity observatory of Strasbourg J9
The ECOGI project takes place in Rittershoffen, 6 km east of Soultz-sous-Forêts (Baujard et al. 2017). This EGS geothermal project was initiated in 2004 with the plant dedicated to an industrial use for heat application. It produces 170 °C hot water and delivers 24 megawatts thermal (MWth) heat power to the "Roquette Frères" bio-refinery, 15 km from the drill site. Around 25% of the industrial heat need is covered. GRT-1 injection well was drilled vertically in December 2012. GRT-2 production well was drilled in July 2014 and is deviated to the North, 1 km away from the first well. The wells (see Fig. 2) are at nearly 2.5 km in depth and cross a regional fault. This N355°E normal fault dips 45° to the west and shows an apparent vertical offset of 200 m. The beginning of the production was the 19th of May 2016.
Long-term microgravity monitoring has been demonstrated to be a powerful tool to characterize the geothermal reservoir evolution and to evaluate flow redistribution during geothermal energy exploitation (Crossley et al. 2013). To report on some successful applications of such methods, we can mention Allis and Hunt (1986) at the Wairakei geothermal field in New Zealand, De Zeeuw-van Dalfsen et al. (2006) at the Krafla geothermal plant in Iceland, Sugihara and Ishido (2008) at the Okuaizu and Ogiri geothermal plants in Japan, and Nishijima et al. (2010) at the Takigami geothermal field in Japan. In France, Hinderer et al. (2015) initiated microgravity monitoring at the Soultz-sous-Forêts geothermal plant in summer 2013 with two additional stations near Rittershoffen. In Hinderer et al. (2015), only the 2014 gravity double differences were computed because of a faulty gravimeter used in 2013. We continue here this previous work, which described only the natural state of the geothermal reservoirs without any activity. We developed the network at the Rittershoffen geothermal plant by adding eleven stations. Yearly repetitions have then been performed since 2014 for the Soultz-sous-Forêts area and since 2015 for the Rittershoffen area, including the start of geothermal production in 2016 at both plants. Initial results were shown by Portier et al. (2018), focusing on the stability of the reference station and the estimate of vertical deformation with continuous GNSS. We extend here this study by presenting the full gravity monitoring from 2014 to 2017 for the Soultz-sous-Forêts and Rittershoffen geothermal sites, taking into account the vertical deformation by repeated geodetic leveling. We also show that our maps of surface gravity change seem to be related to the induced seismicity. Finally, we make an attempt at modeling to explain the positive change in gravity close to the injection zone by a cylindrical body with a height small compared to its depth.
Microgravity monitoring
The time-lapse microgravity method investigates the underground mass redistribution; it can provide insight into the geothermal fluid paths and help to evaluate the water storage changes.
To cover an approximately 4 km2 area around the geothermal plants, thirteen gravity stations (see Fig. 2) have been measured each summer since 2014 for the Soultz-sous-Forêts network and since 2015 for the Rittershoffen network (Hinderer et al. 2015). Stations 10, 12 and 13 are common to both networks. Scintrex CG5 gravimeters are used in the surveys; this spring relative gravimeter (RG) has a 5 µGal precision (1 µGal = 10−8 m s−2) and a large drift which can reach tens or hundreds of µGal day−1. To correct this instrumental drift, short loops of five stations are operated with measurements beginning and ending at a reference station. The Soultz-sous-Forêts and Rittershoffen reference stations are, respectively, stations 50 and 23. To study the stability of the station 50, absolute measurements are done with an FG5 gravimeter (AG) at least once per year. The precision of this instrument is between one and two µGal. Finally, gravity variations at the reference stations are also assessed through tie measurements with the gravity observatory of Strasbourg J9, where several superconducting gravimeters (SG) operate permanently. The relative SG has a precision better than 0.1 µGal and a small drift of 1 or 2 µGal year−1. By combining these three different types of gravimeter (RG, AG and SG), we apply the hybrid gravity concept introduced by Okubo et al. (2002) to our geothermal object (see also Hinderer et al. 2016).
Gravity measurements have been done with different Scintrex CG5 gravimeters (#40691, #41224, #41317). These instruments were calibrated using an absolute gravity calibration line and a correction coefficient was applied to the associated data for each instrument.
The impact of ambient temperature was also evaluated. Special attention was given to the loop ranging between the reference stations and the gravity observatory of Strasbourg J9 (see Fig. 2) which is inside a bunker. Fores et al. (2017) have demonstrated that ambient temperature can modify Scintrex CG5 measurements by a factor close to − 0.5 µGal/ °C. Indeed, closing a 5 station loop lasts around 2 h. The Scintrex CG5 gravimeter records the relative atmospheric temperature simultaneously to the gravity measurement. The difference of temperature between the measurement at a station and the first measurement at the reference station in a same loop leads to gravity changes ranging between − 2 and 2 µGal. Since the accuracy of Scintrex CG5 gravimeter is 5 µGal, we did not take into account the induced variation.
Gravity data are selected and processed with PyGrav Python software (Hector and Hinderer 2016). The instrumental drift is removed and simple differences \(dg_{{x - x_{0} }}^{t}\) are calculated for each station x (Hinderer et al. 2015) with respect to a reference station x0 at a time t, which has hence a gravity value set to zero.
$$dg_{x - x_{0}}^{t} = \left( g_{x} - g_{x_{0}} \right)_{t} \tag{1}$$
Finally, double differences (Hinderer et al. 2015) are obtained by subtracting the simple differences at time \(t_{0}\) from the one at time t. It represents the gravity variation at station x with respect to a reference station x0 and a reference time t0.
$$Dg_{x - x_{0}}^{t - t_{0}} = dg_{x - x_{0}}^{t} - dg_{x - x_{0}}^{t_{0}} \tag{2}$$
The error is calculated by taking the square root of the sum of the variances of all measurements involved.
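For concreteness, the simple- and double-difference computation and the error propagation just described reduce to a few array operations. The following Python sketch uses purely hypothetical drift-corrected readings and uncertainties (not survey data), with station 50 as the reference as in the surveys:

```python
import numpy as np

# Hypothetical drift-corrected relative readings (uGal) and their 1-sigma errors;
# station "50" is the reference station, t0 is the reference survey.
g_t0 = {"50": 0.0, "7": 1234.0, "2": 987.0}
g_t = {"50": 0.0, "7": 1262.0, "2": 961.0}
sig_t0 = {"50": 5.0, "7": 5.0, "2": 5.0}
sig_t = {"50": 5.0, "7": 5.0, "2": 5.0}

ref = "50"
for sta in ("7", "2"):
    dg_t0 = g_t0[sta] - g_t0[ref]          # simple difference at t0 (Eq. 1)
    dg_t = g_t[sta] - g_t[ref]             # simple difference at t (Eq. 1)
    Dg = dg_t - dg_t0                      # double difference (Eq. 2)
    # error = square root of the sum of the variances of the four readings involved
    err = np.sqrt(sig_t[sta]**2 + sig_t[ref]**2 + sig_t0[sta]**2 + sig_t0[ref]**2)
    print(f"station {sta}: Dg = {Dg:+.1f} +/- {err:.1f} uGal")
```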
The observed gravity changes are possibly due to two causes: the Newtonian attraction of the redistributed masses and the vertical deformation in the existing gravity field. Hence, precise leveling surveys were performed to monitor ground displacement.
Vertical deformation control
In principle, vertical ground movements induce a gravity change of about 2 µGal cm−1 in the Bouguer approximation, or nearly 3 µGal cm−1 using the free-air gradient. This deformation should be accounted for to understand the mass balance of the geothermal reservoir under production (Hunt et al. 2002). Both leveling and GNSS monitoring show negligible vertical ground deformation. We are unable to discriminate among the possible processes of mass redistribution (with variable mass and/or density) working at depth, so we assume the most likely process, namely the one working at constant density. Under this assumption, any mass variation that keeps the density at depth constant produces a volume change and hence a vertical deformation. In view of this process, we consider that the most suitable way to account for the vertical ground deformation is through the Bouguer gradient.
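As a quick numerical illustration of these gradients (the height changes below are illustrative, not survey values), the conversion from a vertical displacement to an equivalent gravity change is a one-line computation:

```python
# Gravity change induced by uplift: about -2 uGal/cm (Bouguer) or roughly -3 uGal/cm (free-air).
BOUGUER_GRADIENT = -2.0      # uGal per cm of uplift
FREE_AIR_GRADIENT = -3.086   # uGal per cm of uplift

for dh_mm in (7, 15, -5):    # illustrative height changes in mm
    dh_cm = dh_mm / 10.0
    print(f"{dh_mm:+d} mm -> {BOUGUER_GRADIENT * dh_cm:+.1f} uGal (Bouguer), "
          f"{FREE_AIR_GRADIENT * dh_cm:+.1f} uGal (free-air)")
```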
Geometric leveling surveys have been performed on the Soultz-sous-Forêts and Rittershoffen gravity stations since 2014 (Ferhat et al. 2017). Five loops were used (see Fig. 3). Each gravimetric site is collocated with a leveling benchmark. In addition to the gravity stations, some benchmarks are included in the national leveling benchmarks installed by the French Mapping Agency (IGN, Institut National de l'Information Géographique et Forestière); so the altitudes are defined with respect to the national vertical reference system (NGF-IGN69). To avoid seasonal displacements linked to hydrological loading, vertical control was done at the same period, i.e., end of May. The reference station is the same as the one of Soultz-sous-Forêts gravity monitoring (station 50). In 2014, double leveling was performed involving two foresights and two backsights measurements but it was time-consuming. Since 2015, surveyors have applied only single-leveling. The LEICA DNA03 digital level and a leveling rod were used to obtain a precision of a few millimeters. Some sections were also observed using invar staff and Trimble DiNi 02 digital level. The mean range of the measurements is about 30 m.
The five loop map of the Soultz-sous-Forêts leveling network connecting gravity stations (red triangles). In addition, Rittershoffen gravity stations are represented with blue triangles; the elevation of stations which are not included in loops has been monitored by GNSS measurement. The leveling and gravity reference is the station 50 (bold red triangle)
Soultz-sous-Forêts elevation changes
The maximum vertical deformation in 2015 compared to 2014 survey at station 50 is equal to 7 mm which leads to a gravity change of − 1.4 µGal (see Table 1). This height-induced gravity change is negligible as the Scintrex CG5 precision is around 5 µGal. In the same way, the ground displacement in 2017 compared to 2014 ranges between − 5 and 15 mm which represent, respectively, 1 and − 3 µGal. We notice that the largest deformation of 15 mm measured at the station 10 is associated with the largest error of 10 mm: this station is the farthest away from the reference station.
Table 1 Height differences (h) and their errors (σh) in mm at gravity stations of Soultz-sous-Forêts network compared to 2014 survey
The stability of the leveling reference station 50 is monitored with a permanent GNSS station called GPK1 (RéNaSS 2018). This station is equipped with a TRIMBLE NETR9 receiver and TRIMBLE Zephyr antenna. GNSS data were processed using SCRS-PPP software. Figure 4 shows the vertical displacement in mm at the leveling reference station 50 from 2014 to the beginning of 2018. The mean vertical precision is 7 mm. The observed seasonal effect is minimized by performing leveling campaign at the same period of the year. We notice that the calculated positions in 2015 and 2017 compared to 2014 during the leveling survey are, respectively, equal to − 5 and 0 mm with an error ranging between 5 and 10 mm. Hence, we consider the reference leveling station as stable.
(From RéNaSS 2018)
Time series of GPK1 permanent GNSS vertical positions. Standard deviations vary between 5 and 10 mm
Is there any link between the observed gravity and height changes? We argue that there is none since the leveling measurements are not correlated to the gravity measurements (see Fig. 5). Indeed, if we consider gravity changes at the closest date of the leveling survey, we find a coefficient of determination R2 = 0.01 indicating no correlation. Its value is the same if we distinguish between data obtained in 2015 and those in 2017 (values measured at the stations 12 and 13 are not presented in this study because they are closer to the Rittershoffen network and hence too far away from the reference).
Gravity double differences (in µGal) as a function of height changes (in mm) in 2015 and 2017 compared to 2014 with their errors. The coefficient of determination is equal to 0.0155
Rittershoffen elevation changes
As quoted before, Rittershoffen stations are quite far away from the leveling reference station. This leads to high errors around 1 cm in height. We do not show the results of leveling for the Rittershoffen network but rather exclusively use the continuous GNSS data collected at ECOG (RéNaSS 2018). As for the GPK1 station, measurements are collected on the Rittershoffen geothermal plant with a TRIMBLE NETR9 receiver and TRIMBLE Zephyr antenna. GNSS data are processed with SCRS-PPP. The accuracy of the height changes retrieved from the GPS data varies between 5 and 10 mm. The GPS height changes (see Fig. 6) measured on the 15th of July 2015, 2016 and 2017 compared to the 15th July of 2014 are, respectively, 0, − 5 and − 2 mm, which correspond to equivalent gravity changes lower than 1 µGal.
Time series of ECOG permanent GNSS vertical positions. Standard deviations vary between 5 and 10 mm
Thus, leveling monitoring at the Soultz-sous-Forêts network shows vertical deformation lower than 1 cm. We observe a higher value at station 10; nevertheless, this station also has the largest measurement error, which can be explained by its long distance to the reference station. Continuous GNSS measurements have proven the stability of the reference station 50 and of the ECOG Rittershoffen station. We therefore conclude that the observed gravity double differences must be mostly caused by mass changes (i.e., the effect of Newtonian attraction).
Reference station stability
Before the beginning of the production, FG5#206 absolute measurements at the reference station 50 revealed gravity variations ranging between − 2 and 2 µGal. After the beginning of the production, in parallel to absolute measurements, we have studied the gravity variation of the reference stations 23 and 50 with respect to the gravity observatory of Strasbourg J9. Several superconducting gravimeters operate permanently there, especially the new-generation iOSG23, installed in January 2016.
Superconducting gravity data are processed with T-Soft software (Van Camp and Vauterin 2005). The effects of solid Earth and ocean loading tides, atmospheric pressure and polar motion are removed from the gravity signal; spikes are also deleted before modeling the instrumental drift. To better separate instrumental drift from real gravity changes, we superimpose eight absolute measurements to the obtained signal (see Fig. 7). These FG5 data are corrected for the effect of earth and ocean tides, air pressure and polar motion similarly to the SG data. We suppose that data do not superimpose perfectly because the investigated superconducting gravimeter time series is too short to evaluate correctly the instrumental drift. Furthermore, we think that the absolute measurement error is underestimated. We notice that the residual superconducting signal coincides well with the effect of the hydrological loading calculated by MERRA2 global hydrology model (Boy 2018). This model has a spatial and temporal resolution of 0.625° in latitude and longitude and 1 h, respectively.
Gravity variation measured at the observatory of Strasbourg J9 in µGal by iOSG023 superconducting gravimeter (blue curve). This signal is corrected for the effect of solid Earth and ocean loading tide, atmospheric pressure, polar motion and instrumental drift. It is linked to the hydrological loading (red curve) obtained with MERRA2 model (Boy 2018). FG5#206 absolute measurements and their errors corrected for the effect of solid Earth and ocean loading tide, atmospheric pressure, polar motion are represented in green
Finally, by taking into account the gravity variation at the STJ9 reference station (see Fig. 2), we obtain the double differences shown in Fig. 8. We notice a gravity increase of around 15 µGal in 2016, followed in 2017 by a decrease that leads to a null gravity change for station 50 and − 5 µGal for station 23, before an increase of 10 µGal. The mean error is 7 µGal for station 50 and 9 µGal for station 23.
Gravity double differences and their errors at the reference stations 23 and 50 compared to the observatory of Strasbourg (STJ9) taking the 5th of July 2016 as temporal reference. Two absolute measurements are also displayed with their errors; absolute measurements are corrected for the effect of solid Earth and ocean loading tide, atmospheric pressure, polar motion
Double differences
Rittershoffen network
The Rittershoffen interference tests were performed in May 2016 between GRT1 and GRT2 wells. Then, the production began on the 20th of May 2016. A geothermal fluid of around 80 °C was injected in GRT1 injection well to produce around 170 °C water in GRT2 well. The mean flow rate for the two wells was approximately 110 m3 h−1.
The reference station is here station 23. We choose the third survey of 2015 as temporal reference because of its low error. The Rittershoffen double differences are presented on Fig. 9. In 2015, some double differences (see Eq. 2) exceed the 95% confidence interval: nevertheless, these changes are not linked to the geothermal energy exploitation. After the beginning of the production, we notice a negative change of 20 ± 4 µGal at the station 26 and a positive change of 21 ± 11 µGal at the station 29 in 2016. In 2017, no double differences exceed the 95% confidence interval. If we consider the gravity variation of the reference station 23, no signal is detected.
Rittershoffen gravity double differences in µGal. 95% confidence interval is highlighted in yellow and is centered on the mean survey value (red line). a We do not consider the gravity changes at the reference station 23 (blue dotted line); b accounting for it
Soultz-sous-Forêts network
The Soultz-sous-Forêts interference tests were carried out in May 2016 between the GPK2 production well and the GPK3 and GPK4 injection wells. The production began on the 24th of June 2016. So, GPK3 is the injection well and GPK2 the production well. From the 24th of January to the 30th of June 2017, injection was also done into GPK4 well. The injected flow rate was equal to the produced flow rate, which is around 100 m3 h−1. The injected and produced geothermal fluids have, respectively, a temperature of around 80 °C and 165 °C. When two injection wells were used, the GPK4 flow rate ranged between 29 and 43 m3 h−1 and the GPK3 one ranged between 57 and 73 m3 h−1.
The gravity reference station is the station 50 and the temporal reference is the first survey in 2014. The Soultz-sous-Forêts double differences (see Eq. 2) are presented in Fig. 10. We do not notice any significant value before the production starts; no double differences exceed the 95% confidence interval. After the beginning of the production, if we do not consider the gravity variation measured at the reference station, positive changes are observed at the stations 7, 6 and 9 near the injection area. The maximum value of 31 ± 6 µGal is recorded at the station 7. Negative changes are obtained at the stations 5, 3, 4 and 11 in 2016 and at the station 2 in 2017 in the north part of the studied area. The minimal value of − 28 ± 10 µGal is measured at the station 2. When we take into account the stability of the reference station, the survey errors increase. Nevertheless, the maximum values recorded at the station 7 and 2 are still significant.
Soultz-sous-Forêts gravity double differences in µGal. 95% confidence interval is highlighted in yellow and is centered on the mean survey value (red line). a We do not consider the gravity changes at the reference station 50 (blue dotted line); b accounting for it
Spatial distribution of the gravity changes
We study the spatial distribution of the gravity double differences before and after the start of the production. For a better understanding of the interplaying processes, gravity measurements have been interpolated by a kriging method and superimposed to the recorded induced seismicity cumulated over 2017 (see Fig. 11b). The induced seismicity informs us about the opening of new cracks and fractures as well as on the preferential pathway of the geothermal fluids. The 2017 interpolated gravity value does not take into account the gravity variation of the reference stations since it would only lead to an offset.
Rittershoffen maps of interpolated gravity double differences in µGal measured on the 23rd of June 2015 (a) and on the 29th of June 2017 (b) before and after the beginning of the production, respectively. The geothermal energy exploitation has begun on the 19th of May 2016. The same interpolation color scale is used for the two maps. The station codes are indicated in red and the double differences in blue. On the 23rd of June 2015, no observation was done at station 20. The isolines values are written in black. Well trajectories are represented in blue with their open-hole in red: GRT1 well is the injection well and GRT2 well is used for the production. We superimposed also the cumulated induced seismicity epicenters from January to July 2017 on the second map. They are concentrated near the injection well
For the Rittershoffen network, we do not notice any coherent signal. The seismicity is mainly concentrated around the GRT1 injection well (Maurer et al. 2017) but it is not associated with any significant positive gravity changes after the beginning of the production.
In Soultz-sous-Forêts, a spatial coherence appears after the beginning of the production (see Fig. 12): the injection area around the GPK3 and GPK4 wells is associated with the highest gravity values and the production area near the GPK2 well is linked to lower gravity values compared to the reference station 50. Although stations 7, 8 and 9 are equidistant from the open holes of the injection wells, the gravity value at station 7 is higher. This is in agreement with the location of the seismicity epicenters (Maurer et al. 2017), leading us to hypothesize a possible influence of the fracking on the fluid circulation. It seems that, after injection, geothermal fluids circulate preferentially westwards.
Soultz-sous-Forêts maps of interpolated gravity double difference in µGal measured the 24th of July 2014 (a) and the 4th of July 2017 (b) before and after the beginning of the production, respectively. The geothermal energy exploitation began on the 24th of June 2016. The same interpolation color scale is used for the two maps. The station codes are indicated in red and the double differences in blue. The isolines values are written in black. The wells are represented in blue with their open-hole in red: GPK3 and GPK4 wells are the injection wells and GPK2 well is used for the production. We superimposed also the 2017 cumulated induced seismicity epicenters on the second map
A tentative simplistic model
We do not have any realistic model of mass flow changes available for our study so far. This is why we propose here a rather simple model of Newtonian attraction to explain the largest surface gravity change observed on July 27th, 2016 at the station 7 belonging to Soultz-sous-Forêts network (see Fig. 10). Rather than considering a simple Mogi spherical source, we model a cylindrical body with a radius r and a height h with the top face located at a depth z under the observation point (station 7). Indeed, a cylindrical source could be a more realistic geometry for an open-hole injected fluid into a fractured volume. In this case, the resulting gravity variation gcyl (Kara and Kanli 2005) induced by a cylinder of density \(\rho\) at ground level above the centre of cylinder is
$$g_{\text{cyl}} = \pi r^{2} G\rho \left[ {\frac{1}{z} - \frac{1}{z + h}} \right]$$
assuming that h ≪ z and where G is the gravitational constant equal to 6.67 × 10−11 m3 kg−1 s−2. Taking into account a geothermal fluid density of 1060 kg m−3 (Baujard and Bruel 2006), to induce a measurable gravity effect of 30 µGal (like the one of the 27th of July 2016 at station 7), we would need to inject:
A mass of 112.5 MT at 5 km depth. This mass corresponds to a geothermal fluid included in a cylinder with 10 m height and 1838 m radius. If we consider an injection flow rate of 100 m3 h−1, this volume would be reached after 121 years.
A mass of 84 kT at 137 meters depth in a cylinder of 50 m radius and 10 m height. It corresponds to 33 injection days with a 100 m3 h−1 flow rate.
In the first case, we respect the injection depth (5 km) imposed by the cased wells, but it leads to an unreasonably long period of injection. In the second case (137 m), we respect the injected volume, as the production began on the 24th of June 2016 and the gravity change was measured on the 27th of July 2016; however, such a shallow source depth is unrealistic for a fluid injected through a cased well reaching the reservoir at depth. Furthermore, the shallowest Soultz-sous-Forêts induced seismicity is at around 3 km depth, and only a few events reach it.
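The numbers in the two scenarios can be reproduced directly from the cylinder formula. The short Python sketch below uses the flow rate, density and geometries quoted in the text; it is only a back-of-the-envelope check, not the modeling tool used in the study, and the point-mass approximation (h ≪ z) is rougher for the shallow case:

```python
import numpy as np

G = 6.67e-11        # gravitational constant (m^3 kg^-1 s^-2)
RHO = 1060.0        # geothermal fluid density at 20 degC (kg m^-3)
FLOW = 100.0        # injection flow rate (m^3 h^-1)

def g_cyl_microgal(r, z, h, rho=RHO):
    """Gravity at ground level above the centre of a buried cylinder (valid for h << z)."""
    g_si = np.pi * r**2 * G * rho * (1.0 / z - 1.0 / (z + h))  # in m s^-2
    return g_si * 1e8                                          # 1 uGal = 1e-8 m s^-2

cases = [
    ("real injection depth", 1838.0, 5000.0, 10.0),   # radius, depth, height (m)
    ("real injected volume", 50.0, 137.0, 10.0),
]
for label, r, z, h in cases:
    volume = np.pi * r**2 * h        # m^3
    mass_kt = volume * RHO / 1e6     # kilotonnes (case 1 amounts to about 112.5 MT)
    days = volume / FLOW / 24.0      # days of injection at 100 m^3/h (case 1: ~121 years)
    print(f"{label}: g = {g_cyl_microgal(r, z, h):.0f} uGal, "
          f"mass = {mass_kt:.0f} kT, {days:.0f} days of injection")
```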
Moreover, four additional points have to be considered:
The open-hole part of the injection wells is not under the station 7 (about 1 km to the north-east). Since the surface gravity due to a buried cylinder decreases away from its centre, we would even need a larger injected volume than computed previously at 5 km depth to explain the 30 μGal changes. The second option where we locate the real injected volume at 137 m would lead to a negligible gravity change at station 7.
Once the geothermal loop is established, we extract the same volume as the injected volume. Our model explaining the increase of 30 μGal at station 7 by a single cylinder does not take into account the effect of a second cylinder with a mass deficit linked to the extraction close to station 2 (see Fig. 12). Adding the effects of both cylinders would also lead to smaller changes at station 7.
If the water volume is located in a porous medium, the size of the cylinder radius would be larger and be able to modify the shape of the surface gravity anomaly but not its maximum amplitude at the center.
Indeed, we have studied the influence of the cylinder radius on gravity double differences above the cylinder centre in the two mentioned cases, i.e., the first one considering the real injection depth of Soultz-sous-Forêts geothermal plant and the second one considering the cumulated injected volume on the 27th of July 2016. Hence, we defined two models which lead to a 30 µGal gravity change like the one measured at the station 7. We considered the density fixed (1060 kg/m3).
In the case of a 5000 m depth of the cylinder, the model requires an injected mass of 112.5 MT which leads to a volume of 1.06 × 108 m3. We have studied the impact of the radius length r. Assuming that h ≪ z, we choose to work with a maximum height of 50 m (= z/100) obtained for a radius of 822 m. As shown by Fig. 13, the height of the cylinder h decreases in conjunction with the increase of the radius when considering a fixed volume. Moreover, the radius does not impact significantly the gravity above the cylinder centre.
Height of the cylinder (top) and gravity value at the ground level above the centre of the cylinder (bottom) compared to the radius of the cylinder. A mass of 112.5 MT with a density of 1060 kg m−3 is injected at 5 km in depth
The real injected mass is 84 kT which corresponds to a volume of 7.9 × 104 m3 and the model requires to locate this mass at 137 m. We choose to work with a radius of 50 m and a height of 10 m. As shown by Fig. 14, the height of the cylinder decreases with increasing radius and the gravity only slightly increases. Once again, the radius does not impact significantly the gravity above the cylinder centre.
Height of the cylinder (top) and gravity value at the ground level above the centre of the cylinder (bottom) compared to the cylinder radius. A mass of 84 kT with a density of 1060 kg m−3 is injected at 137 m in depth
We do not consider the density variation with the temperature. While a density of 1060 kg m−3 is measured at 20 °C (Baujard and Bruel 2006), the injected and produced fluids have a temperature of 80 °C and 165 °C, respectively.
We define the boundary limits of our models taking into account the density changes. We have studied two sub-cases for the two previous cases considering the real injection depth and the real injected volume. We impose in one sub-case a density value of 969 kg m−3 (160 °C) and in the second one a density value of 1033 kg m−3 (80 °C). We assume that the mass is conserved leading to a change in volume (elastic medium). However, we do not consider the effect of the pressure. We fix the height of the cylinder at 10 m. The volume changes of the cylinder due to the temperature-induced density changes are shown in Table 2 for the real depth case and in Table 3 for the real injected mass case.
Table 2 Values of the radius (in m), the volume (in m3) and the mass (in MT) of the cylinder for a geothermal fluid of 80 °C (sub-case 1) and 160 °C (sub-case 2)
Table 3 Values of the radius (in m), the volume (in m3) and the mass (in kT) of the cylinder for a geothermal fluid of 80 °C (sub-case 1) and 160 °C (sub-case 2)
The discussion of the four points above shows that it is not possible with our simple cylindrical model to explain a 30 μGal gravity change close to the injection zone in Soultz-sous-Forêts. Other prismatic bodies with possible large dip angles (faults, fractures) would be more realistic but are beyond the scope of our study.
Microgravity monitoring has been performed at the Soultz-sous-Forêts and Rittershoffen geothermal sites. Two relative gravity networks of thirteen gravity stations each have been established and repeatedly measured each summer, since 2014 for the Soultz-sous-Forêts network and since 2015 for the Rittershoffen network. The networks have been measured with a Scintrex CG5 gravimeter. The stability of the reference stations of both networks was studied thanks to FG5#206 absolute measurements, but also thanks to tie measurements with respect to the gravity observatory of Strasbourg J9, where gravity variations are continuously recorded by a new-generation iOSG023 superconducting gravimeter. Precise leveling monitoring showed that vertical deformation is negligible (lower than 1 cm). Furthermore, we checked that the vertical deformation appears to be unrelated to the gravity changes recorded at the Soultz-sous-Forêts network. This means that the gravity double differences are only caused by the Newtonian attraction due to the geothermal fluid redistribution at depth. On the Rittershoffen network, we do not detect any signal after the beginning of the production of the plant. On the contrary, a differential signal appears on the Soultz-sous-Forêts network, with higher gravity double difference values measured near the injection area and lower values near the production area. Moreover, this observation is coherent with the location of induced seismicity epicenters, which should be related to the preferential path of the geothermal fluids. Nevertheless, a simple model using a cylindrical shape for the geothermal reservoir cannot explain this result. Despite the small signals (or lack of signals) observed in this study, we still believe that gravity monitoring is a powerful method to estimate the mass balance during long-term geothermal energy exploitation, especially for large production geothermal plants like the ones in Iceland or Indonesia.
Aichholzer C, Duringer P, Orciani S, Genter A. New stratigraphic interpretation of the twenty-eight-year old GPK-1 geothermal well of Soultz-sous-Forêts (Upper Rhine Graben, France). In: 4th European Geothermal Workshop EGW. Strasbourg. 2015. https://doi.org/10.1186/s40517-016-0055-7.
Allis RG, Hunt TM. Analysis of exploitation-induced gravity changes at Wairakei geothermal field. Geophysics. 1986;51(8):1647–60. https://doi.org/10.1190/1.1442214.
Baillieux P, Schill E, Abdelfettah Y, Dezayes C. Possible natural fluid pathways from gravity pseudo-tomography in the geothermal fields of Northern Alsace (Upper Rhine Graben). Geothermal Energy. 2014;2:16. https://doi.org/10.1186/s40517-014-0016-y.
Baujard C, Bruel D. Numerical study of the impact of fluid density on the pressure distribution and stimulated volume in the Soultz HDR reservoir. Geothermics. 2006;35:5–6. https://doi.org/10.1016/j.geothermics.2006.10.004.
Baujard C, Genter A, Dalmais E, Maurer V, Hehn R, Rosillette R, Vidal J, Schmittbuhl J. Hydrothermal characterization of wells GRT-1 and GRT-2 in Rittershoffen, France: implications on the understanding of natural flow systems in the rhine graben. Geothermics. 2017;65:255–68. https://doi.org/10.1016/j.geothermics.2016.11.001.
Boy J-P. http://loading.u-strasbg.fr/surface_gravity.php. Accessed 20 Aug 2018.
Crossley D, Hinderer J, Riccardi U. The measurement of surface gravity. Rep Prog Phys. 2013;76 (4):046101. https://doi.org/10.1088/0034-4885/76/4/046101
Dezayes C, Gentier S, Genter A. Deep geothermal energy in western Europe: the Soultz-sous-Forêts project. BRGM/RP-54227-FR; 2005.
De Zeeuw-van Dalfsen E, Rymer H, Williams-Jones G, Sturkell E, Sigmundsson F. Integration of micro-gravity and geodetic data to constrain shallow system mass changes at Krafla Volcano, N Iceland. Bull Volcanol. 2006;68:420–31. https://doi.org/10.1007/s0445-005-0018-5.
Ferhat G, Portier N, Hinderer J, Calvo Garcia-Maroto M, Abdelfettah Y, Riccardi U. 3 years of monitoring using leveling and hybrid gravimetry applied to the geothermal sites of Soultz-sous-Forêts and Rittershoffen, Rhine Graben, France. INGEO International Conference on Engineering Surveying, Lisbon, Portugal, 18–20 October 2017.
Fores B, Champollion C, Le Moigne N, Chery J. Impact of ambient temperature on spring-based relative gravimeter measurements. J Geod. 2017;91:269–77. https://doi.org/10.1007/s00190-016-0961-2.
Genter A, Evans K, Cuenot N, Fritsch D, Sanjuan B. Contribution of the exploration of deep crystalline fractured reservoir of Soultz-sous-Forêts to the knowledge of enhanced geothermal systems (EGS). CR Geosci. 2010;342:502–16. https://doi.org/10.1016/j.crte.20.
Hector B, Hinderer J. PyGrav, a Python-based program for handling and processing relative gravity data. Comput Geosci. 2016;91:90–7. https://doi.org/10.1016/j.cageo.2016.03.010.
Hinderer J, Calvo M, Abdelfettah Y, Hector B, Riccardi U, Ferhat G, Bernard J-D. Monitoring of a geothermal reservoir by hybrid gravimetry; feasibility study applied to the Soultz-sous-Forêts and Rittershoffen sites in the Rhine graben. Geothermal Energy. 2015. https://doi.org/10.1186/s40517-015-0035-3.
Hinderer J, Hector B, Mémin A, Calvo M. Hybrid gravimetry as a tool to monitor surface and underground mass changes. In: International Association of Geodesy Symposia. Berlin: Springer. 2016. https://doi.org/10.1007/1345_2016_253.
Hunt T, Sugihara M, Sato T, Takemura T. Measurement and use of the vertical gravity gradient in correcting repeat microgravity measurements for the effects of ground subsidence in geothermal systems. Geothermics. 2002;31:525–43. https://doi.org/10.1016/S03756505(02)00010-X.
Kara I, Kanli A. Nomograms for interpretation of gravity anomalies of a vertical cylinder. J Balkan Geophys Soc. 2005;8(1):1–6.
Lu H. A global review of enhanced geothermal system (EGS). Renew Sustain Energy Rev. 2018;81:2902–21. https://doi.org/10.1016/j.rser.2017.06.097.
Maurer V, Cuenot N, Richard A, Grunberg M. On-going seismic monitoring of the Rittershoffen and the Soultz EGS projects (Alsace, France). In: 2nd Induced Seismicity Workshop, 14–17 March 2017, Schatzalp, Switzerland. 2017.
Meyer R, Foulger GR. The European Cenozoic Volcanic Province is not caused by mantle plumes. 2007. http://www.mantleplumes.org/Europe.htm.
Nishijima J, Saibi H, Sofyan Y, Shimose S, Fujimitsu Y, Ehara S, Fukuda Y, Hasegawa T, Taniguchi M. Reservoir monitoring using hybrid micro-gravity measurements in the Takigami geothermal field, Central Kyushu, Japan. In: Proceedings World Geothermal Congress, Bali, Indonesia, 25–29 April 2010. 2010.
Okubo S, Satomura M, Furuya M, Sun W, Matsumoto S, Ueki S, Watanabe H. Grand design for the hybrid gravity network around the Mt. Fuji volcano. In: International symposium on geodesy in Kanazawa abstract. 2002. p. 39–40.
Portier N, Hinderer J, Riccardi U, Ferhat G, Calvo M, Abdelfettah Y, Heimlich C, Bernard J-D. Hybrid gravimetry monitoring of Soultz-sous-Forêts and Rittershoffen geothermal sites (Alsace, France). Geothermics. 2018;201:219–76. https://doi.org/10.1016/j.geothermics.2018.07.008
RéNaSS (Réseau National de Surveillance Sismique). https://eost.unistra.fr/observatoires/geodesie-et-gravimetrie/renag-eost/gotherm/. Accessed 20 Aug 2018.
Sugihara M, Ishido T. Geothermal reservoir monitoring with a combination of absolute and relative gravimetry. Geophysics. 2008;73(6):WA37–47.
Valley B, Evans KF. Stress state at Soultz-sous-Forêts to 5 km depth from wellbore failure and hydraulic observations. In: Proceedings, thirty-second workshop on geothermal reservoir engineering. Standford University, Standford, California, USA, SGP-TR-183. 2007.
Van Camp M, Vauterin P. Tsoft: graphical and interactive software for the analysis of time series and Earth tides. Comput Geosci. 2005;31(5):631–40. https://doi.org/10.1016/j.cageo.2004.11.015.
JH, UR, MC, YA and NP have acquired, processed and interpreted gravity data. GF has acquired, processed and interpreted precise leveling data. J-DB has acquired and processed FG5#206 absolute measurements. All authors read and approved the final manuscript.
We thank ES-G (Electricité de Strasbourg-Géothermie) for providing useful information on the geothermal activity of the Soultz-sous-Forêts and Rittershoffen sites. We also thank M. Grunberg for providing the location of the induced seismicity. We thank the two anonymous referees for their useful comments. N. Portier thanks Région Grand Est for the partial funding of her Ph.D.
The datasets supporting the conclusions of this article are available on:
http://cdg.u-strasbg.fr/PortailEOST/Gravi/v1/ (superconducting data).
https://eost.unistra.fr/observatoires/geodesie-et-gravimetrie/renag-eost/gotherm/ (GNSS data).
http://loading.u-strasbg.fr/surface_gravity.php (hydrological loading).
We do not share gravity measurements, leveling data and induced seismicity epicenters; the authors signed a confidentiality agreement with ES-G (Electricité de Strasbourg-Géothermie).
This work has been published under the framework of the LABEX ANR-11-LABX-0050\_G-EAU-THERMIE-PROFONDE and benefits from a funding from the state managed by the French National Research Agency as part of the Investments program for the Future.
Institut de Physique du Globe de Strasbourg, UMR 7516, Université de Strasbourg/CNRS, 5 Rue Descartes, 67084, Strasbourg, France
Nolwenn Portier, Jacques Hinderer, Gilbert Ferhat, Yassine Abdelfettah & Jean-Daniel Bernard
Università degli Studi di Napoli Federico II, Dipartimento di Scienze della Terra, dell'Ambiente e delle Risorse DiSTAR, Via Vicinale Cupa Cintia 21, Complesso Universitario di Monte S. Angelo, Edificio L, 80126, Napoli, Italy
Umberto Riccardi
INSA, Strasbourg 24 Boulevard de la Victoire, 67084, Strasbourg, France
Gilbert Ferhat
Instituto Geografico Nacional, C/Alfonso XII 3, 28014, Madrid, Spain
Marta Calvo
Nolwenn Portier
Jacques Hinderer
Yassine Abdelfettah
Jean-Daniel Bernard
Correspondence to Nolwenn Portier.
Portier, N., Hinderer, J., Riccardi, U. et al. New results on the gravity monitoring (2014–2017) of Soultz-sous-Forêts and Rittershoffen geothermal sites (France). Geotherm Energy 6, 19 (2018). https://doi.org/10.1186/s40517-018-0104-5
Gravity changes
Soultz-sous-Forêts
Rittershoffen
Integrative Approaches in Deep Geothermal Science
Linear algebra for quantum computing
Linear algebra is the language of quantum computing. Although you don't need to know it to implement or write quantum programs, it is widely used to describe qubit states, quantum operations, and to predict what a quantum computer does in response to a sequence of instructions.
Just like being familiar with the basic concepts of quantum physics can help you understand quantum computing, knowing some basic linear algebra can help you understand how quantum algorithms work. At the least, you'll want to be familiar with vectors and matrix multiplication. If you need to refresh your knowledge of these algebra concepts, here are some tutorials that cover the basics:
Jupyter notebook tutorial on linear algebra
Jupyter notebook tutorial on complex arithmetic
Linear Algebra for Quantum Computation
Quantum Computation Primer
Vectors and matrices in quantum computing
A qubit can be in a state of 1 or 0 or a superposition of both. Using linear algebra, the state of a qubit is described as a vector and is represented by a single column matrix $\begin{bmatrix} a \\ b \end{bmatrix}$. It is also known as a quantum state vector and must meet the requirement that $|a|^2 + |b|^2 = 1$.
The elements of the quantum state vector are probability amplitudes: $|a|^2$ is the probability of the qubit collapsing to zero and $|b|^2$ is the probability of it collapsing to one. The following matrices all represent valid quantum state vectors:
$$\begin{bmatrix} 1 \\ 0 \end{bmatrix}, \begin{bmatrix} 0 \\ 1 \end{bmatrix}, \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{2}} \end{bmatrix}, \begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{-1}{\sqrt{2}} \end{bmatrix}, \text{ and }\begin{bmatrix} \frac{1}{\sqrt{2}} \\ \frac{i}{\sqrt{2}} \end{bmatrix}.$$ Quantum operations can also be represented by a matrix. When a quantum operation is applied to a qubit, the two matrices that represent them are multiplied and the resulting answer represents the new state of the qubit after the operation.
Here are two common quantum operations represented with matrix multiplication.
The X operation is represented by the Pauli matrix $X$,
$$X = \begin{bmatrix} 0 & 1 \\ 1 & 0 \end{bmatrix},$$
and is used to flip the state of a qubit from 0 to 1 (or vice-versa), for example
$$\begin{bmatrix}0 &1\\ 1 &0\end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
The 'H' operation is represented by the Hadamard transformation $H$,
$$H = \dfrac{1}{\sqrt{2}}\begin{bmatrix}1 &1\\ 1 &-1\end{bmatrix},$$
and puts a qubit into a superposition state where it has an even probability of collapsing either way, as shown here
$$\frac{1}{\sqrt{2}}\begin{bmatrix}1 &1\\ 1 &-1\end{bmatrix}\begin{bmatrix} 1 \\ 0 \end{bmatrix}=\frac{1}{\sqrt{2}}\begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
Notice that $|a|^2 =|b|^2 = \frac{1}{2}$, meaning that the probability of collapsing to zero and one state is the same.
A matrix that represents a quantum operation has one requirement – it must be a unitary matrix. A matrix is unitary if the inverse of the matrix is equal to the conjugate transpose of the matrix.
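These statements are easy to check numerically. The following NumPy sketch (it is not part of the Quantum Development Kit) applies the $X$ and $H$ matrices to the state $\begin{bmatrix} 1 \\ 0 \end{bmatrix}$ and verifies that both operations are unitary:

```python
import numpy as np

ket0 = np.array([[1], [0]], dtype=complex)           # the |0> state vector

X = np.array([[0, 1],
              [1, 0]], dtype=complex)                 # Pauli X (bit flip)
H = np.array([[1, 1],
              [1, -1]], dtype=complex) / np.sqrt(2)   # Hadamard

print(X @ ket0)    # -> the |1> state
print(H @ ket0)    # -> equal superposition, both amplitudes 1/sqrt(2)

# A valid quantum operation must be unitary: U^dagger U = I.
for U in (X, H):
    assert np.allclose(U.conj().T @ U, np.eye(2))
```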
Representing two-qubit states
In the examples above, the state of one qubit was described using a single column matrix $\begin{bmatrix} a \\ b \end{bmatrix}$, and applying an operation to it was described by multiplying the two matrices. However, quantum computers use more than one qubit, so how do you describe the combined state of two qubits?
The real power of quantum computing comes from leveraging multiple qubits to perform computations. For a deeper dive into this topic, see Operations on multiple qubits.
Remember that the state of each qubit is a vector in its own two-dimensional vector space, so the two states can't simply be multiplied together. Instead, you use a tensor product, which is a related operation that creates a new vector space from individual vector spaces, and is represented by the $\otimes$ symbol. For example, the tensor product of two qubit states $\begin{bmatrix} a \\ b \end{bmatrix}$ and $\begin{bmatrix} c \\ d \end{bmatrix}$ is calculated
$$ \begin{bmatrix} a \\ b \end{bmatrix} \otimes \begin{bmatrix} c \\ d \end{bmatrix} =\begin{bmatrix} a \begin{bmatrix} c \\ d \end{bmatrix} \\ b \begin{bmatrix}c \\ d \end{bmatrix} \end{bmatrix} = \begin{bmatrix} ac \\ ad \\ bc \\ bd \end{bmatrix}. $$
The result is a four-dimensional column vector, and the squared magnitude of each element gives a probability. For example, $|ac|^2$ is the probability of the two qubits collapsing to 0 and 0, $|ad|^2$ is the probability of 0 and 1, and so on.
Just as a single qubit state $\begin{bmatrix} a \\ b \end{bmatrix}$ must meet the requirement that $|a|^2 + |b|^2 = 1$ in order to represent a quantum state, a two-qubit state $\begin{bmatrix} ac \\ ad \\ bc \\ bd \end{bmatrix}$ must meet the requirement that $|ac|^2 + |ad|^2 + |bc|^2+ |bd|^2 = 1$.
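NumPy's `kron` computes exactly this tensor product, so the normalization of a two-qubit state can be checked directly (a sketch with arbitrary but normalized single-qubit states):

```python
import numpy as np

q1 = np.array([1, 0], dtype=complex)                 # |0>
q2 = np.array([1, 1j], dtype=complex) / np.sqrt(2)   # (|0> + i|1>)/sqrt(2)

two_qubit = np.kron(q1, q2)                          # [ac, ad, bc, bd]
print(two_qubit)

# The squared magnitudes still sum to one, so this is a valid two-qubit state.
assert np.isclose(np.sum(np.abs(two_qubit) ** 2), 1.0)
```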
Linear algebra is the standard language for describing quantum computing and quantum physics. Even though the libraries included with the Microsoft Quantum Development Kit help you run advanced quantum algorithms without diving into the underlying math, understanding the basics helps you get started quickly and provides a solid foundation to build on.
Install the QDK
Fast Fourier Transform for Convolution
The convolution and cross-correlation of two discrete sequences computed from the definition are costly because the asymptotic complexity is $O(N^2)$, where $N$ is the length of the sequence. The convolution theorem suggests that the convolution and cross-correlation could instead be computed using the Fourier transform. There are asymptotically faster, $O(N \log N)$, fast Fourier transform algorithms for computing the Fourier transform, which make the convolution and cross-correlation computations asymptotically faster as well.
In this blog post, I would like to discuss the Fourier transform, the convolution theorem, and why convolution in neural networks could be computed asymptotically faster using the fast Fourier transform.
Fourier Transform
Continuous Fourier transform.
\begin{align}
F(k) &= \mathcal{F}_{x} \{f(x)\}(k) \\
&= \int_{-\infty}^{\infty} f(x) e^{-2\pi ikx} dx \\
\end{align}
Discrete-time Fourier transform.
\begin{align}
F[k] &= \mathcal{F}_{x} \{f[x]\}[k] \\
&= \sum_{x = -\infty}^{\infty} f[x] e^{-2\pi ikx} \\
\end{align}
Inverse Fourier Transform
Continuous inverse Fourier transform.
\begin{align}
f(x) &= \mathcal{F}^{-1}_{k} \{F(k)\}(x) \\
&= \int_{-\infty}^{\infty} F(k) e^{2\pi ikx} dk \\
\end{align}
Discrete-time inverse Fourier transform.
\begin{align}
f[x] &= \mathcal{F}^{-1}_{k} \{F[k]\}[x] \\
&= \sum_{k = -\infty}^{\infty} F[k] e^{2\pi ikx} \\
\end{align}
Convolution Theorem
For continuous complex-valued functions $f$ and $g$, the convolution is defined as
\begin{gather}
(f \star g)(x) = \int_{-\infty}^{\infty} f(\tau) g(x - \tau) d \tau
\end{gather}
Similarly, for discrete sequences, the convolution is defined as
\begin{gather}
(f \star g)[n] = \sum_{m = -\infty}^{\infty} f[m] g[n - m]
\end{gather}
Here we denote the convolution operation using $\star$.
Consider two continuous functions $g(x)$ and $h(x)$ with Fourier transforms $G$ and $H$
\begin{align}
G(k) &= \mathcal{F}_{x} \{g(x)\}(k) \\
&= \int_{-\infty}^{\infty} g(x) e^{-2\pi ikx} dx \\
H(k) &= \mathcal{F}_{x} \{h(x)\}(k) \\
&= \int_{-\infty}^{\infty} h(x) e^{-2\pi ikx} dx \\
\end{align}
We define the convolution outcome $r(x)$
\begin{align}
r(x) &= (g \star h)(x) \\
&= \int_{-\infty}^{\infty} g(\tau) h(x - \tau) d \tau \\
\end{align}
The convolution theorem states that
\begin{align}
R(k) &= \mathcal{F}_{x} \{r(x)\}(k) \\
&= G(k) H(k) \\
&= (G \cdot H)(k) \\
\end{align}
With the inverse Fourier transform, we have
\begin{align}
r(x) &= \mathcal{F}^{-1}_{k} \{R(k)\}(x) \\
&= \mathcal{F}^{-1}_{k} \{ (G \cdot H)(k) \}(x) \\
&= \int_{-\infty}^{\infty} R(k) e^{2\pi ikx} dk \\
\end{align}
This means that to compute the convolution $(g \star h)(x)$, instead of doing direct summation using the convolution definition, we can also do inverse Fourier transform on the function $(G \cdot H)(k)$.
We would like to show a quick proof for the convolution theorem.
The convolution theorem could be proved using Fubini's Theorem.
\begin{align}
R(k) &= \int_{-\infty}^{\infty} r(x) e^{-2\pi ikx} dx \\
&= \int_{-\infty}^{\infty} \Big( \int_{-\infty}^{\infty} g(\tau) h(x - \tau) d \tau \Big) e^{-2\pi ikx} dx \\
&= \int_{-\infty}^{\infty} g(\tau) \Big( \int_{-\infty}^{\infty} h(x - \tau) e^{-2\pi ikx} d x \Big) d \tau \\
&= \int_{-\infty}^{\infty} g(\tau) \Big( \int_{-\infty}^{\infty} h(x - \tau) e^{-2\pi ik(x - \tau)} e^{-2\pi ik\tau} d (x - \tau) \Big) d \tau \\
&= \int_{-\infty}^{\infty} g(\tau) e^{-2\pi ik\tau} \Big( \int_{-\infty}^{\infty} h(x - \tau) e^{-2\pi ik(x - \tau)} d (x - \tau) \Big) d \tau \\
&= \int_{-\infty}^{\infty} g(\tau) e^{-2\pi ik\tau} \Big( \int_{-\infty}^{\infty} h(x^{\prime}) e^{-2\pi ikx^{\prime}} d x^{\prime} \Big) d \tau \\
&= \int_{-\infty}^{\infty} g(\tau) e^{-2\pi ik\tau} H(k) d \tau \\
&= H(k) \int_{-\infty}^{\infty} g(\tau) e^{-2\pi ik\tau} d \tau \\
&= G(k) H(k)
\end{align}
This concludes the proof.
Similarly, for two discrete sequences $g[x]$ and $h[x]$ with discrete-time Fourier transforms (DTFT) $G$ and $H$
\begin{align}
G[k] &= \mathcal{F}_{x} \{g[x]\}[k] \\
&= \sum_{x = -\infty}^{\infty} g[x] e^{-2\pi ikx} \\
H[k] &= \mathcal{F}_{x} \{h[x]\}[k] \\
&= \sum_{x = -\infty}^{\infty} h[x] e^{-2\pi ikx} \\
\end{align}
We define the convolution outcome $r[x]$
\begin{align}
r[x] &= (g \star h)[x] \\
&= \sum_{\tau = -\infty}^{\infty} g[\tau] h[x - \tau] \\
\end{align}
The convolution theorem states that
\begin{align}
R[k] &= \mathcal{F}_{x} \{r[x]\}[k] \\
&= G[k] H[k] \\
&= (G \cdot H)[k] \\
\end{align}
With the inverse Fourier transform, we have
\begin{align}
r[x] &= \mathcal{F}^{-1}_{k} \{R[k]\}[x] \\
&= \mathcal{F}^{-1}_{k} \{ (G \cdot H)[k] \}[x] \\
&= \sum_{k = -\infty}^{\infty} R[k] e^{2\pi ikx} \\
\end{align}
We will skip the proof for the discrete sequences case, as the proof is almost the same as the one for the continuous function case.
So, in short, the convolution theorem states that
\begin{gather}
\mathcal{F}\{g \star h\} = \mathcal{F}\{g\} \cdot \mathcal{F}\{h\} \\
g \star h = \mathcal{F}^{-1} \{ \mathcal{F}\{g\} \cdot \mathcal{F}\{h\} \}
\end{gather}
"Cross-Correlation" Theorem
Cross-Correlation
For continuous complex-valued functions $f$ and $g$, the cross-correlation is defined as
\begin{gather}
(f \ast g)(x) = \int_{-\infty}^{\infty} \overline{f(\tau)} g(x + \tau) d \tau
\end{gather}
Similarly, for discrete sequences, the cross-correlation is defined as
\begin{gather}
(f \ast g)[n] = \sum_{m = -\infty}^{\infty} \overline{f[m]} g[n + m]
\end{gather}
Because the "cross-correlation" theorem can be derived analogous to the convolution theorem, we will skip most of the details and derivations. The "cross-correlation" theorem states that
\begin{gather}
\mathcal{F}\{g \ast h\} = \overline{\mathcal{F}\{g\}} \cdot \mathcal{F}\{h\} \\
g \ast h = \mathcal{F}^{-1} \{ \overline{\mathcal{F}\{g\}} \cdot \mathcal{F}\{h\} \}
\end{gather}
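For finite, periodic sequences the same identity holds for the discrete Fourier transform, and it can be checked numerically. The NumPy sketch below compares a circular cross-correlation computed from the definition against the product of the spectra; the test sequences are random and purely illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 256
g = rng.standard_normal(N) + 1j * rng.standard_normal(N)
h = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Circular cross-correlation from the definition: r[n] = sum_m conj(g[m]) h[(n + m) mod N]
r = np.array([np.sum(np.conj(g) * np.roll(h, -n)) for n in range(N)])

# "Cross-correlation" theorem: F{g * h} = conj(F{g}) . F{h}
assert np.allclose(np.fft.fft(r), np.conj(np.fft.fft(g)) * np.fft.fft(h))
```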
Given a discrete sequence $f[x]$ of length $N$, computing its discrete-time Fourier transform sequence $F[k]$ of length $N$ by direct summation takes on the order of $N \times N$ multiplications and additions, so the asymptotic complexity of the vanilla Fourier transform is $O(N^2)$.
There are fast Fourier transform (FFT) algorithms which can compute the same transform with an asymptotic complexity of $O(N \log N)$. Therefore, when $N$ is very large, using the fast Fourier transform is much faster than using the vanilla discrete-time Fourier transform. We will not discuss the fast Fourier transform algorithms themselves in this blog post.
Given two discrete sequences $g[x]$ and $h[x]$, to compute the convolution using the fast Fourier transform, we have to compute a fast Fourier transform for each of the two discrete sequences ($O(N \log N)$), compute the element-wise product of the two transforms ($O(N)$), and apply one inverse fast Fourier transform ($O(N \log N)$).
The asymptotic complexity is $O(N \log N)$, whereas computing the convolution using convolution definition takes $O(N^2)$.
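As a concrete illustration, the following NumPy sketch computes a linear convolution through the FFT by zero-padding to length $N + M - 1$ and checks it against the direct $O(N^2)$ computation. It is only a demonstration of the idea, not the implementation used by any particular deep learning framework:

```python
import numpy as np

def fft_convolve(g, h):
    # Zero-pad so the circular convolution implied by the DFT matches the
    # linear convolution, then apply the convolution theorem.
    n = len(g) + len(h) - 1
    G = np.fft.fft(g, n)
    H = np.fft.fft(h, n)
    return np.fft.ifft(G * H).real      # O(N log N) overall

rng = np.random.default_rng(42)
g = rng.standard_normal(4096)
h = rng.standard_normal(4096)

assert np.allclose(fft_convolve(g, h), np.convolve(g, h))   # O(N^2) reference
```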
For the convolution in neural networks, when the input size and the kernel size are very large, computing the convolution (actually cross-correlation) using the fast Fourier transform will be much faster than computing it from the definition. However, for most of the neural networks we see nowadays, the input size and the kernel size are usually not large enough for the fast Fourier transform to outperform the vanilla computation using the convolution definition.
Cross-Correlation - Wikipedia
Convolution - Wikipedia
Convolution Theorem - Wikipedia
Quantitative assessment of the effectiveness of joint measures led by Fangcang shelter hospitals in response to COVID-19 epidemic in Wuhan, China
Hui Jiang1,2 na2,
Pengfei Song3 na2,
Siyi Wang4 na2,
Shuangshuang Yin3,
Jinfeng Yin1,
Chendi Zhu1,2,
Chao Cai5,
Wangli Xu4 na1 &
Weimin Li1,2,6 na1
To quantitatively evaluate the effectiveness of Fangcang shelter hospitals, designated hospitals, and the time interval from illness onset to diagnosis toward the prevention and control of the COVID-19 epidemic.
We used SEIAR and SEIAR-CQFH compartmental models to simulate the two-period epidemic in Wuhan and to calculate the time-dependent basic reproduction numbers (BRNs) of symptomatic infected individuals, asymptomatic infected individuals, exposed individuals, and community-isolated infected individuals. Scenarios that varied in terms of the maximum numbers of open beds in Fangcang shelter hospitals and designated hospitals, and the time intervals from illness onset to hospital visit and diagnosis, were considered to quantitatively assess the optimal measures.
The BRN decreased from 4.50 on Jan 22, 2020 to 0.18 on March 18, 2020. Without Fangcang shelter hospitals, the cumulative numbers of cases and deaths would increase by 18.58 and 51.73%, respectively. If the number of beds in the designated hospitals decreased by 1/2 and 1/4, the number of cumulative cases would increase by 178.04 and 92.1%, respectively. If the time interval from illness onset to hospital visit were 4 days, the numbers of cumulative cases and deaths would increase by 2.79 and 6.19%, respectively. If Fangcang shelter hospitals were not established, the number of beds in designated hospitals were reduced by 1/4, and the time interval from hospital visit to diagnosis became 4 days, the cumulative number of cases would increase by 268.97%.
The declining BRNs indicate the high effectiveness of the joint measures. The joint measures led by Fangcang shelter hospitals are crucial and need to be rolled out globally, especially when medical resources are limited.
In December 2019, the coronavirus disease 2019 (COVID-19) outbreak occurred in Wuhan, China. Subsequently, it spread to many countries around the world, and WHO declared COVID-19 a global pandemic on 11 March, 2020 [1]. In the early period of the epidemic in Wuhan, thousands of cases rushed to hospitals, putting enormous pressure on the city's medical system [2]. To manage the serious situation of COVID-19, strategies of joint prevention and control through triage were adopted. Mild/moderate cases and close contacts were isolated at community and quarantine-point facilities, while severe and critical cases were admitted to designated hospitals.
By implementing multiple prevention and control measures, such as community isolation, quarantine-point isolation, designated hospitals, and new diagnostic and intervention techniques, rational allocation of medical resources and services was guaranteed during the COVID-19 epidemic. However, the treatment pressure remained severe, with shortages of beds and medical resources. To relieve this pressure, 86 designated hospitals providing approximately 24,000 beds were rebuilt and re-established [3], and a total of 344 national medical teams with a medical staff of 42,322 were dispatched [4]. In addition, from February 05, Wuhan successively established and opened 16 Fangcang shelter hospitals, providing about 13,000 beds to admit mild/moderate cases. The implementation of multi-stage joint measures and multi-sectoral division of labor and cooperation played an important role in the response to COVID-19. However, although the function of Fangcang shelter hospitals has been defined in previous studies [2], the role of these hospitals in joint measures has not been quantitatively evaluated. Further, no study has quantitatively evaluated the role of joint measures such as establishing Fangcang shelter hospitals, expanding designated hospitals, and shortening the time interval from illness onset to diagnosis in response to the COVID-19 epidemic.
In this study, we evaluated the effectiveness of joint measures led by Fangcang shelter hospitals after the lockdown of Wuhan on January 23, 2020 [3]. In addition, to include asymptomatic infected individuals, we extended the classic susceptible-exposed-infected-recovered (SEIR) transmission model to the susceptible-exposed-symptomatic infected-asymptomatic infected-recovered (SEIAR) model for describing the epidemiological characteristics. Four additional compartments (community isolation [C], quarantine point isolation [Q], Fangcang shelter hospitals [F], and designated hospitals [H]) were added to quantitatively assess Fangcang shelter hospitals for the COVID-19 epidemic in Wuhan from January 23, 2020 to March 18, 2020.
Data sources and collection
From the National Health Commission of the People's Republic of China and WHO, we collected the numbers of newly confirmed cases, cumulative confirmed cases, and deaths of COVID-19 in Wuhan from January 23 to March 18, 2020 [1, 5]. The data of the maximum numbers of open beds in Fangcang shelter hospitals (Supplementary Figure 1A), designated hospitals (Supplementary Figure 1B), and quarantine points were obtained from Wuhan Municipal Health Commission [3].
SEIAR and SEIAR-CQFH models to simulate the two-period epidemic in Wuhan
Based on the date of the Wuhan lockdown, we divided the Wuhan epidemic into two periods: December 7, 2019 to January 22, 2020, and January 23, 2020 to March 18, 2020; SEIAR and SEIAR-CQFH compartmental models were employed to simulate these two periods of the Wuhan epidemic, respectively. For the first period, we extended the basic SEIR model to the SEIAR model by enrolling asymptomatic infected individuals as follows (Fig. 1a-b) [6].
$$ \left\{\begin{array}{l}\frac{\mathrm{d}S}{\mathrm{d}\mathrm{t}}=-{\beta}_0\left(I+{f}_AA+{f}_EE\right)\frac{S}{N},\\ {}\frac{\mathrm{d}E}{\mathrm{d}\mathrm{t}}={\beta}_0\left(I+{f}_AA+{f}_EE\right)\frac{S}{N}-\sigma E,\\ {}\frac{\mathrm{d}I}{\mathrm{d}\mathrm{t}}=\rho \sigma E-{\gamma}_II-{\alpha}_II,\\ {}\frac{\mathrm{d}A}{\mathrm{d}\mathrm{t}}=\left(1-\rho \right)\sigma E-{\gamma}_AA,\\ {}\frac{\mathrm{d}R}{\mathrm{d}\mathrm{t}}={\gamma}_II+{\gamma}_AA,\end{array}\right. $$
where S(t), E(t), I(t), A(t), R(t) and N = S(t) + E(t) + I(t) + A(t) + R(t) are the number of susceptible, exposed, symptomatic infected, asymptomatic infected, recovered individuals and the total population of Wuhan at time t, respectively. The functions S(t), E(t), I(t), A(t), R(t) that are dependent on t are simply denoted as S, E, I, A, R in the SEIAR model.
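To make the structure of the SEIAR system concrete, the sketch below integrates the five equations numerically with SciPy. The parameter values and initial conditions are placeholders chosen only for illustration, not the estimates reported in Table 1, so the output is not meant to reproduce the study's results.

```python
import numpy as np
from scipy.integrate import odeint

def seiar(y, t, beta0, f_A, f_E, sigma, rho, gamma_I, gamma_A, alpha_I, N):
    """Right-hand side of the SEIAR system used for the first period."""
    S, E, I, A, R = y
    infection = beta0 * (I + f_A * A + f_E * E) * S / N   # force of infection
    dS = -infection
    dE = infection - sigma * E
    dI = rho * sigma * E - gamma_I * I - alpha_I * I
    dA = (1 - rho) * sigma * E - gamma_A * A
    dR = gamma_I * I + gamma_A * A
    return [dS, dE, dI, dA, dR]

# Placeholder parameters and initial conditions (illustration only, not the fitted values).
N = 11_000_000                                  # approximate population of Wuhan
y0 = [N - 1, 0, 1, 0, 0]                        # one symptomatic case seeded
params = (0.8, 0.5, 0.5, 1 / 5.2, 0.7, 1 / 10, 1 / 10, 0.002, N)
t = np.linspace(0, 46, 47)                      # roughly Dec 7, 2019 to Jan 22, 2020

S, E, I, A, R = odeint(seiar, y0, t, args=params).T
print(f"symptomatic infections on day 46: {I[-1]:.0f}")
```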
Descriptions of the first and second periods by the SEIAR and SEIAR-CQFH models. Panel a: Epidemiological descriptions of exposed (E), symptomatic infected (I) and asymptomatic infected (A) individuals. Panel b: Description of the first period (December 7 to January 22) by the SEIAR model. Panel c: Description of the second period (January 23 to March 18) by the SEIAR-CQFH model
For the second period, we extended the SEIAR model to enroll clinically diagnosed cases in community isolation (C), quarantine point isolation (Q), Fangcang shelter hospitals (F) and designated hospitals (H). The SEIAR-CQFH model is described as follows (Fig. 1c):
$$ \left\{\begin{array}{l}\frac{\mathrm{d}S}{\mathrm{d}\mathrm{t}}=-\beta (t)\left(I+{f}_AA+{f}_EE+{f}_CC\right)\frac{S}{N},\\ {}\frac{\mathrm{d}E}{\mathrm{d}\mathrm{t}}=\beta (t)\left(I+{f}_AA+{f}_EE+{f}_CC\right)\frac{S}{N}-\sigma E,\\ {}\frac{\mathrm{d}I}{\mathrm{d}\mathrm{t}}=\rho \sigma E-{\gamma}_II-{\alpha}_II-\delta I,\\ {}\frac{\mathrm{d}A}{\mathrm{d}\mathrm{t}}=\left(1-\rho \right)\sigma E-{\gamma}_AA,\\ {}\frac{\mathrm{d}C}{\mathrm{d}\mathrm{t}}=\delta I-{\gamma}_CC-{C}_H(t)-{C}_Q(t)-{C}_F(t),\\ {}\frac{\mathrm{d}Q}{\mathrm{d}\mathrm{t}}={C}_Q(t)-\left(1-{\rho}_Q\right){\gamma}_QQ-{Q}_F(t)-{Q}_H(t),\\ {}\frac{\mathrm{d}F}{\mathrm{d}\mathrm{t}}={Q}_F(t)+{C}_F(t)-\left(1-{\rho}_F\right){\gamma}_FF-{F}_H(t),\\ {}\frac{\mathrm{d}H}{\mathrm{d}\mathrm{t}}={C}_H(t)+{Q}_H(t)+{F}_H(t)-{\gamma}_HH-{\alpha}_HH,\\ {}\frac{\mathrm{d}R}{\mathrm{d}\mathrm{t}}={\gamma}_II+{\gamma}_AA+{\gamma}_CC+\left(1-{\rho}_Q\right){\gamma}_QQ+\left(1-{\rho}_F\right){\gamma}_FF+{\gamma}_HH,\end{array}\right. $$
where S(t), E(t), I(t), A(t), R(t), and N have the same definitions as those in the SEIAR model, and C(t), Q(t), F(t), and H(t) are the numbers of individuals in community isolation, in quarantine point isolation, in Fangcang shelter hospitals, and in designated hospitals at time t, respectively. More details about the model parameters and function settings are presented in the supplementary material.
Some parameters were determined from existing references (refer to Table 1 for details), and seven unknown parameters (β0, βend, r, αI, αH, δcq, θ) were estimated by the nonlinear least-squares (NLS) method based on the newly confirmed cases, cumulative confirmed cases, and cumulative COVID-19 deaths in Wuhan from January 23 to March 18. The confidence intervals of the parameters were calculated using the stochastic simulation method (Table 1).
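The estimation step is, in essence, a nonlinear least-squares fit of simulated epidemic curves to the observed counts. The toy sketch below fits a simple logistic growth curve to synthetic cumulative-case data with SciPy, only to illustrate the mechanics of such a fit; in the study itself the simulated curves come from the full SEIAR-CQFH model and seven parameters are estimated, which is not reproduced here.

```python
import numpy as np
from scipy.optimize import least_squares

def logistic(t, K, r, t0):
    """Toy cumulative-case curve (logistic growth), standing in for the full model."""
    return K / (1.0 + np.exp(-r * (t - t0)))

# Synthetic "observed" cumulative cases (placeholder data, not the Wuhan counts).
t = np.arange(0, 56)
rng = np.random.default_rng(0)
observed = logistic(t, K=50_000, r=0.18, t0=20) + rng.normal(0, 500, t.size)

def residuals(theta):
    K, r, t0 = theta
    return logistic(t, K, r, t0) - observed

fit = least_squares(residuals, x0=[30_000, 0.1, 10], bounds=([0, 0, 0], np.inf))
print("estimated (K, r, t0):", fit.x)
```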
Table 1 Parameters used in the SEIAR and SEIAR-CQFH models
Basic reproduction number (BRN) for exposed, asymptomatic, symptomatic, and community isolated infected individuals
The basic reproduction number (BRN), defined as the expected average number of secondary cases produced by a typical infective individual during the infectious period in a completely susceptible population, is one of the most significant concepts in population biology [13, 14]. More importantly, it often determines the threshold behavior of many epidemic models. A disease typically dies out if the BRN is less than unity and spreads in the population otherwise. Hence, this parameter is commonly used to measure the effort required to control the spread of an infectious disease in epidemiology. We applied the next generation matrix to estimate the BRN, \( {\mathcal{R}}_0(t) \), with the control measures in force, as follows:
$$ {\mathcal{R}}_0(t)=\left\{\begin{array}{ll}\frac{\rho {\beta}_0}{\gamma_I+{\alpha}_I}+\frac{f_E{\beta}_0}{\sigma }+\frac{\left(1-\rho \right){f}_A{\beta}_0}{\gamma_A},& \text{before Jan 23},\\ \frac{\rho \beta (t)}{\gamma_I+\delta +{\alpha}_I}+\frac{f_E\beta (t)}{\sigma }+\frac{\left(1-\rho \right){f}_A\beta (t)}{\gamma_A}+\frac{f_C\rho \delta \beta (t)}{\left({\gamma}_I+{\alpha}_I+\delta \right)\left({\gamma}_C+\left({C}_Q(t)+{C}_F(t)+{C}_H(t)\right)/C(t)\right)},& \text{after Jan 23}.\end{array}\right. $$
In addition, the BRNs of exposed, asymptomatic, symptomatic, and community-isolated infected cases are as follows:
$$ {\mathcal{R}}_0^E(t)=\left\{\begin{array}{ll}\frac{f_E{\beta}_0}{\sigma },& \text{before Jan 23},\\ \frac{f_E\beta (t)}{\sigma },& \text{after Jan 23},\end{array}\right.\qquad {\mathcal{R}}_0^A(t)=\left\{\begin{array}{ll}\frac{\left(1-\rho \right){f}_A{\beta}_0}{\gamma_A},& \text{before Jan 23},\\ \frac{\left(1-\rho \right){f}_A\beta (t)}{\gamma_A},& \text{after Jan 23},\end{array}\right.\qquad {\mathcal{R}}_0^I(t)=\left\{\begin{array}{ll}\frac{\rho {\beta}_0}{\gamma_I+{\alpha}_I},& \text{before Jan 23},\\ \frac{\rho \beta (t)}{\gamma_I+\delta +{\alpha}_I},& \text{after Jan 23},\end{array}\right. $$
$$ {\mathcal{R}}_0^C(t)=\frac{f_C\rho \delta \beta (t)}{\left({\gamma}_I+{\alpha}_I+\delta \right)\left({\gamma}_C+\left({C}_Q(t)+{C}_F(t)+{C}_H(t)\right)/C(t)\right)},\qquad \text{after Jan 23}. $$
The BRNs and their confidence intervals were calculated from the above formulas based on the 1000 groups of estimated values.
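As a small illustration, the functions below evaluate the pre- and post-lockdown BRN expressions given above for a single parameter set. The values plugged into the example calls are placeholders, not the fitted estimates; in the study the formulas were evaluated over the 1000 sampled parameter sets to obtain confidence intervals.

```python
def brn_before(beta0, rho, gamma_I, alpha_I, f_E, sigma, f_A, gamma_A):
    """R0 before Jan 23: symptomatic + exposed + asymptomatic contributions."""
    return (rho * beta0 / (gamma_I + alpha_I)
            + f_E * beta0 / sigma
            + (1 - rho) * f_A * beta0 / gamma_A)

def brn_after(beta_t, rho, gamma_I, alpha_I, delta, f_E, sigma,
              f_A, gamma_A, f_C, gamma_C, out_rate):
    """R0 after Jan 23; out_rate stands for (C_Q(t) + C_F(t) + C_H(t)) / C(t)."""
    community = (f_C * rho * delta * beta_t
                 / ((gamma_I + alpha_I + delta) * (gamma_C + out_rate)))
    return (rho * beta_t / (gamma_I + delta + alpha_I)
            + f_E * beta_t / sigma
            + (1 - rho) * f_A * beta_t / gamma_A
            + community)

# Placeholder parameter values, for illustration only.
print(brn_before(beta0=1.0, rho=0.7, gamma_I=0.1, alpha_I=0.002,
                 f_E=0.5, sigma=1 / 5.2, f_A=0.5, gamma_A=0.1))
print(brn_after(beta_t=0.3, rho=0.7, gamma_I=0.1, alpha_I=0.002, delta=0.3,
                f_E=0.5, sigma=1 / 5.2, f_A=0.5, gamma_A=0.1,
                f_C=0.3, gamma_C=0.1, out_rate=0.5))
```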
Patients were not involved in the design of this study.
SEIAR and SEIAR-CQFH models simulated the two-period epidemic
The numbers of newly confirmed cases, cumulative confirmed cases, and deaths (0, 50,005, and 2496) reported in Wuhan up to March 18, 2020 are basically consistent with the simulated results of the models (21, 50,926, and 2590), indicating that the models fit the real data well (Fig. 2 a-c). In addition, after adopting various prevention and control measures, although the BRN fluctuated slightly from February 3, 2020 to February 9, 2020, the overall trend was a decrease from 4.50 on January 22, 2020 to 0.18 on March 18, 2020. Specifically, the BRNs of symptomatic infected individuals, asymptomatic infected individuals, and exposed individuals decreased from 3.57, 0.18, and 0.75 on January 23, 2020 to 0.04, 0.02, and 0.10 on March 18, 2020. In addition, although the BRN of community-isolated symptomatic infected individuals increased slightly from 0.24 on January 24, 2020 to 0.41 on February 2, 2020, it then decreased continuously from 0.27 on February 8, 2020 to 0.02 on March 18, 2020 (Fig. 2d). In addition, the SEIAR-CQFH model simulated the transmission of the epidemic after January 23, 2020, and found that the increase in the numbers of cumulative confirmed cases and deaths was smooth, except for February 12, 2020 (Fig. 2a). The reason for the surge was that on February 12, 2020, clinically diagnosed cases began to be counted as confirmed cases.
Number of COVID-19 cases and the BRN simulated by SEIAR and SEIAR-CQFH models. Panel a: Comparison between number of new cases simulated by model and real data. Panel b: Comparison between cumulative number of cases simulated by model and real data. Panel c: Comparison between number of deaths simulated by model and real data. Panel d: The BRNs of different types of cases
Assessment of Fangcang shelter hospitals
The SEIAR-CQFH model was used to simulate the effect of the number of beds in Fangcang shelter hospitals. With this model, we confirmed that if the number of beds was reduced to 1/2 or 1/4 of the actual number, the cumulative numbers of confirmed cases and deaths increased markedly, especially the cumulative deaths. Specifically, if the numbers of beds in Fangcang shelter hospitals were 0, 1/4, 1/2, and normal, the cumulative numbers of cases on March 18, 2020 would be 60,389, 56,924, 54,430, and 50,925 (real data: 50,005), respectively. By March 18, 2020, there would be 18.58, 11.78, and 6.88% more cumulative confirmed cases in the cases of 0, 1/4, and 1/2 beds, compared with the condition with normal beds (Fig. 3a). Compared with the cumulative number of cases (425) on January 23, 2020, the month-on-month growths in the cases of 0, 1/4, 1/2, and normal beds were 12,533.73, 12,082.60, 11,683.11, and 11,049.31%, respectively (Fig. 3a). In addition, if the numbers of beds in Fangcang shelter hospitals were 0, 1/4, 1/2, and normal, the numbers of deaths on March 18, 2020 would be 3929, 3399, 3019, and 2590 (real data: 2495), respectively. By March 18, 2020, there would be 51.73, 31.25, and 16.59% more cumulative deaths in the cases of 0, 1/4, and 1/2 beds, respectively, compared with the condition with normal beds. Compared with the death number (190) on January 23, 2020, the month-on-month growths in the cases of 0, 1/4, 1/2, and normal beds were 954.56, 927.92, 872.48, and 779.35%, respectively (Fig. 3b).
Cumulative numbers of cases and deaths vary with the beds number in Fangcang shelter hospitals. Panel a: Cumulative number of COVID-19 cases. Panel b: Cumulative number of COVID-19 deaths
Assessment of designated hospitals
The impact of the number of beds in designated hospitals is similar to that in Fangcang shelter hospitals in the COVID-19 epidemic; however, it is important to note that reducing the number of beds in designated hospitals would lead to a more significant increase in confirmed cases. Specifically, if the numbers of beds in designated hospitals are 1/4, 1/2, and normal, the cumulative numbers of confirmed cases on March 18, 2020 would be 141,594, 97,829, and 50,926 (real data: 50,005), respectively. By March 18, 2020, there would be 178.04 and 92.1% more cumulative confirmed cases for 1/4 and 1/2 beds, respectively, compared to the condition with normal beds. Compared to the cumulative number of cases (425) on January 23, 2020, the month-on-month growths for 1/4, 1/2, and normal beds were 23,234.81, 18,784.59, and 11,049.31%, respectively. Although the number of cases increased slightly with the increase in bed size in the early period, the increase in the number of beds significantly inhibited the increase in the number of cases in the later period (Fig. 4).
Cumulative number of COVID-19 cases vary with the number of beds in designated hospitals
Assessment of the joint measures led by Fangcang shelter hospitals
We used the SEIAR-CQFH model to estimate the impact of time intervals from illness onset to hospital visit of 1 day, 2 days, and 4 days on the COVID-19 epidemic and found that this time interval was important for the epidemic. Compared with 1 and 2 days, a time interval of 4 days from illness onset to hospital visit resulted in a significant increase in the numbers of confirmed cases and deaths. Specifically, the cumulative numbers of cases and deaths as of March 18, 2020 would be 54,350 and 2750, respectively, an increase of 6.73 and 4.29% in cases and 6.19 and 3.31% in deaths in comparison with intervals of 1 day and 2 days, respectively (Fig. 5a-b).
Cumulative numbers of COVID-19 cases and deaths vary with the time intervals. Panel a: Cumulative number of cases varies with the time intervals from illness onset to hospital visit. Panel b: Cumulative number of deaths varies with the time intervals from illness onset to hospital visit. Panel c: Cumulative number of cases varies with the time intervals from hospital visit to diagnosis. Panel d: Cumulative number of deaths varies with the time intervals from hospital visit to diagnosis
The impact of the time interval from hospital visit to diagnosis on the numbers of confirmed cases and deaths was even more pronounced. Specifically, before February 12, 2020, the time interval from hospital visit to diagnosis had less effect on the cumulative numbers of cases and deaths. After February 12, 2020, the shorter the time interval from hospital visit to diagnosis, the fewer the accumulated cases. As of March 18, 2020, the cumulative numbers of cases were 50,926, 55,334, and 64,863 for intervals of 1 day, 2 days, and 4 days, respectively. In addition, deaths were more significantly affected after February 22, 2020; that is, the shorter the time interval from hospital visit to diagnosis, the fewer the deaths, and the gap increased with time. As of March 18, 2020, the total numbers of deaths were 2590 (real data: 2496), 26,280 and 27,740 for intervals of 1, 2, and 4 days, respectively (Fig. 5c-d).
In general, in view of the severity of, and public concern about, the COVID-19 epidemic, individuals with symptom onset usually visited hospitals soon; the other medical services, including the time to diagnosis, the number of beds in Fangcang shelter hospitals, and the number of beds in designated hospitals, were evaluated and provided by the Chinese government. Therefore, we put the dynamic changes of the above variables, except the time from illness onset to hospital visit, into the model to simulate the effect of changes in multiple variables on the cumulative number of cases and thereby evaluate the joint measures. The results indicate that when the numbers of beds in Fangcang shelter hospitals and designated hospitals are normal and both the time intervals from illness onset to hospital visit and from hospital visit to diagnosis are 1 day, the cumulative number of cases is the lowest and basically consistent with the real data (50,926; real data: 50,005). If no Fangcang shelter hospitals were established, the number of beds in designated hospitals were reduced to 1/4, the time interval from onset to hospital visit were 1 day, and the time interval from hospital visit to diagnosis were 4 days, the cumulative number of cases would be the highest (187,904). For the other combinations of medical services, the cumulative numbers of cases are higher than that under the actual prevention and control measures (Fig. 6).
Cumulative number of COVID-19 cases varies with the joint prevention and control measures
Considering asymptomatic cases in the classic SEIR transmission dynamic model, we simulated the transmission of the epidemic before and after January 23, 2020 using an extended SEIAR model, and added four compartments (community isolation, quarantine point isolation, Fangcang shelter hospitals, and designated hospitals) to construct the SEIAR-CQFH model. The simulated results indicate that the numbers of new cases (21), cumulative confirmed cases (50,926), and deaths (2590) up to March 18, 2020 estimated by the two-stage SEIAR and SEIAR-CQFH models are close to the data published by the government (0, 50,005 and 2496) [1, 3, 5]. In addition, for any day from December 7, 2019 to March 18, 2020, the real data matched the simulated data, indicating that the two-stage models were appropriate for COVID-19 transmission. In addition, the BRN decreased from 4.50 on January 23, 2020, to 0.18 on March 18, 2020, which is also consistent with other reports [15,16,17]. Moreover, the BRNs of symptomatic infected cases, asymptomatic infected cases, exposed individuals, and isolated infected cases decreased from January 23, 2020 to March 18, 2020. The time when the BRN began to decline basically coincided with the time when the national medical teams began assisting Wuhan and with the lockdown of Wuhan [18, 19], indicating the effectiveness of the joint prevention and control strategies adopted by the Chinese government. Therefore, we used the SEIAR-CQFH model to quantitatively assess the measures of Fangcang shelter hospitals, designated hospitals, and the time intervals from illness onset to diagnosis in the COVID-19 epidemic after January 23, 2020 in Wuhan city in mainland China.
In the joint prevention and control of the COVID-19 epidemic, Fangcang shelter hospitals played important isolation and triage roles as intermediate platforms. At the beginning of February 2020, designated hospitals in Wuhan did not have enough beds for COVID-19 patients, especially for the thousands of patients with mild to moderate COVID-19 symptoms [2, 3]. Mild and moderate patients could be isolated in the community; however, epidemiological studies revealed that in China, COVID-19 has a high rate of intrafamily transmission [2, 10, 20,21,22,23], and more than 50% of COVID-19 patients had at least one family member with the disease [2]. In addition, it is difficult to monitor disease progression in community isolation, and asymptomatic infected individuals may deteriorate to mild or moderate symptoms [21,22,23]. Our study also confirmed that the BRN for community-isolated symptomatic infected individuals increased from January 24 to February 2, 2020, while the other BRNs decreased. This indicates that community isolation was not effective. On February 2, 2020, Wuhan asked individuals in community isolation, newly suspected individuals, and close contacts to move to designated isolation points. Three days later, on February 5, 2020, Wuhan successively opened 16 Fangcang shelter hospitals to treat mild and moderate patients. The implementation of this measure was conducive to the rapid isolation and triage of mild and moderate cases. Therefore, from February 5, 2020, the BRN of community-isolated symptomatic infected individuals exhibited a continuous downward trend.
In addition to isolation and triage, basic medical care, frequent monitoring, and rapid referral were also among the original intentions of the establishment of the Fangcang shelter hospitals [2, 3]. Our study also confirmed the important role of Fangcang shelter hospitals in the early treatment of COVID-19: the results show that without Fangcang shelter hospitals, the cumulative numbers of cases and deaths would increase by 18.58 and 51.73% by March 18, 2020. In addition, if the number of beds was reduced to 1/2 or 1/4, the cumulative number of cases would increase by 6.88 or 11.78% and the cumulative number of deaths by 16.59 or 31.25%, respectively. Moreover, one of the important functions of Fangcang shelter hospitals was monitoring and rapid referral [2], which enabled severe COVID-19 cases to be treated in the shortest time and increased the possibility of survival. The treatment of severe cases was inseparable from the designated hospitals, which also played an important role as a high-level platform in hierarchical prevention and control. Our study also showed that if the number of beds in the designated hospitals decreased to 1/2 or 1/4, the number of COVID-19 cases would increase significantly from 50,926 to 97,829 and 141,594, respectively. After January 25, 2020, with the continuous increase in medical materials and medical staff, the number of beds in designated hospitals increased, which increased the treatment opportunities for severe cases and reduced deaths among severe cases of COVID-19.
Shortening the time interval between hospital transfers can increase the possibility of survival in severe cases; similarly, shortening the time intervals from illness onset to hospital visit and to confirmation can also reduce deaths and transmission. Our study showed that the numbers of deaths and cumulative cases significantly decreased after reducing the time intervals from illness onset to hospital visit and from hospital visit to confirmation. We used the models to simulate the effect of the joint measures and found that if the numbers of beds in Fangcang shelter hospitals and designated hospitals were normal, and both the time intervals from illness onset to hospital visit and from hospital visit to diagnosis were 1 day, the cumulative number of cases was the lowest and basically consistent with the real data (50,926; real data: 50,005). Therefore, the measures taken during that period were optimal; if Fangcang shelter hospitals had not been established, the number of beds had been reduced to 1/4, and the time interval had been 4 days, the cumulative number of cases would increase by 268.97%. This further verifies the importance of joint measures in COVID-19 epidemic prevention and control, and thus the measures deserve to be rolled out globally. In addition, the combination of the numbers of beds in Fangcang shelter hospitals and designated hospitals and the time interval to diagnosis was determined by a detailed and professional evaluation by the Chinese government; it was the best and fastest choice after a full evaluation of the materials, personnel, and other conditions under the increasingly severe situation of the COVID-19 epidemic, and our study also verified this result. However, we must consider and summarize what can be learned from the COVID-19 epidemic response to improve the ability to deal with emerging infectious diseases, especially when medical resources are limited.
This study has some limitations. First, it did not quantitatively assess the effectiveness of community isolation and quarantine point isolation because of the difficulty encountered in collecting the related data set. Second, we were unable to collect the real-time number of beds in the central isolation point, and we used a fixed number published by the National Health and Planning Commission.
In conclusion, our study provides a detailed quantitative assessment of the effects of Fangcang shelter hospitals, designated hospitals, and the time intervals from illness onset to hospital visit and diagnosis on the COVID-19 epidemic in Wuhan city, mainland China, with particular attention to the role of Fangcang shelter hospitals. The results indicate that Fangcang shelter hospitals, like designated hospitals, made an irreplaceable contribution to the control of the COVID-19 epidemic; moreover, the combination of measures, including the normal number of beds in Fangcang shelter hospitals, was optimal, making the prevention and control strategies more effective. Lastly, although the COVID-19 epidemic has been basically brought under control in China and the Fangcang shelter hospitals have been closed, we still cannot take the situation lightly. We should summarize the prevention and control experience of COVID-19 and provide more scientific methods for China and the rest of the world in responding to outbreaks of emerging infectious diseases, especially for countries and regions with limited medical resources.
Our data are from publicly published sources, have no privacy implications, and can be found at http://wjw.wuhan.gov.cn/.
World Health Organization (WHO). Coronavirus disease (COVID-19) Pandemic. https://www.who.int/emergencies/diseases/novel-coronavirus-2019. (Date accessed: May 14, 2020).
Chen S, Zhang Z, Yang J, et al. Fangcang shelter hospitals: a novel concept for responding to public health emergencies. Lancet. 2020;395:1305–14.
Wuhan Municipal Health Commission. COVID-19. http://wjw.wuhan.gov.cn/. (Date accessed: May 14, 2020).
Sina. Physicians supporting Hubei. http://blog.sina.com.cn/s/blog_45bb8ce70102yory.html. (Date accessed: May 14, 2020).
National Health Commission of the People's Republic of China. COVID-19. http://www.nhc.gov.cn/. (Date accessed: May 14, 2020).
Brauer F, Castillo-Chavez C. Mathematical models in population biology and epidemiology. New York: Springer; 2012. https://doi.org/10.1007/978-1-4614-1686-9.
Chen Y, Wang A, Yi B, et al. Epidemiological characteristics of infection in COVID-19 close contacts in Ningbo city (in Chinese). Chin J Epidemiol. 2020;41(5):667–71.
Backer J, Klinkenberg D, Wallinga J. Incubation period of 2019 novel coronavirus (2019-nCoV) infections among travellers from Wuhan, China, 20–28 January 2020. Eurosurveillance. 2020;25(5):200006.
Lauer SA, Grantz KH, Bi Q, Jones FK, Zheng Q, Meredith HR, et al. The incubation period of coronavirus disease 2019 (COVID-19) from publicly reported confirmed cases: estimation and application. Ann Intern Med. 2020;172(9):577–82. https://doi.org/10.7326/M20-0504.
Special Expert Group for Control of the Epidemic of Novel Coronavirus Pneumonia of the Chinese Preventive Medicine Association. An update on the epidemiological characteristics of novel coronavirus pneumonia (COVID-19) (in Chinese). Chin J Epidemiol. 2020;41(2):139–44.
Qiu J. Covert coronavirus infections could be seeding new outbreaks. Nature. 2020. https://doi.org/10.1038/d41586-020-00822-x.
Sina. The deteriorating rate from mild or moderate. http://k.sina.com.cn/article_7211561239_m1add7b11703300wsxs.html?from=health. (Date accessed: May 14, 2020).
Anderson R, May R. Infectious diseases of humans: dynamics and control. Oxford: Oxford University Press; 1992.
Diekmann O, Heesterbeek J. Mathematical epidemiology of infectious diseases: model building, analysis and interpretation. Chichester: John Wiley & Sons; 2000.
Sanche S, Lin YT, Xu C, et al. High contagiousness and rapid spread of severe acute respiratory syndrome coronavirus 2. Emerg Infect Dis. 2020;26(7):1470–7.
Wu J, Leung K, Leung G. Nowcasting and forecasting the potential domestic and international spread of the 2019-nCoV outbreak originating in Wuhan, China: a modelling study. Lancet. 2020;395(10225):689–97. https://doi.org/10.1016/S0140-6736(20)30260-9.
Zhou T, Liu Q, Yang Z, Liao J, Yang K, Bai W, et al. Preliminary prediction of the basic reproduction number of the Wuhan novel coronavirus 2019-nCoV. J Evid Based Med. 2020;13(1):3–7. https://doi.org/10.1111/jebm.12376.
The Central People's Government of the People's Republic of China. Announcement of novel coronavirus infection prevention command headquarters in Wuhan. Website of Hubei Provincial Government 2020. http://www.gov.cn/xinwen/2020-01/23/content_5471751.htm. (Date accessed: May 14, 2020).
National Health Commission of the People's Republic of China. National Health and Health Commission organized several medical teams to reinforce Wuhan (News). Website of Hubei Provincial Government 2020. http://www.hubei.gov.cn/zhuanti/2020/gzxxgzbd/gdzs/202001/t20200125_2014882.shtml. (Date accessed: May 14, 2020).
Tian S, Hu N, Lou J, et al. Characteristics of COVID-19 infection in Beijing. J Infect. 2020;80(4):401–6.
WHO Report of the WHO-China Joint Mission on coronavirus disease 2019 (COVID-19). https://www.who.int/docs/default-source/coronaviruse/who-china-joint-mission-on-covid-19-final-report.pdf. (Date accessed: May 14, 2020).
McNeil Jr, DG. Inside China's all-out war on the coronavirus https://www.nytimes.com/2020/03/04/health/coronavirus-china-aylward.html. (Date accessed: May 14, 2020).
Maddow R. How a country serious about coronavirus does testing and quarantine. https://www.msnbc.com/rachel-maddow/watch/how-a-country-serious-about-coronavirus-does-testing-and-quarantine-80595013902. (Date accessed: May 14, 2020).
This study was funded by grants from the National Science and Technology Major Project of China (2018ZX10302302-001-004); National Key Research and Development Program (2018YFC2000300); National Natural Science Foundation of China (U1903118); National Natural Science Foundation of China (11631012); National Natural Science Foundation of China (11971478).
Wangli Xu and Weimin Li are Co-senior authors on this study.
Hui Jiang, Pengfei Song, and Siyi Wang contributed equally to this work.
Beijing Chest Hospital, Capital Medical University, Beijing, 101149, China
Hui Jiang, Jinfeng Yin, Chendi Zhu & Weimin Li
Beijing Tuberculosis and Thoracic Tumor Research Institute, Beijing, 101149, China
Hui Jiang, Chendi Zhu & Weimin Li
School of Mathematics and Statistics, Xi'an Jiaotong University, Xi'an, 710049, ShaanXi, China
Pengfei Song & Shuangshuang Yin
Center for Applied Statistics, School of Statistics, Renmin University of China, Beijing, 100872, China
Siyi Wang & Wangli Xu
Beijing Youan Hospital, Capital Medical University, Beijing, 100069, China
Chao Cai
Beijing Municipal Key Laboratory of Clinical Epidemiology, School of Public Health, Capital Medical University, Beijing, 100069, China
Hui Jiang
Pengfei Song
Siyi Wang
Shuangshuang Yin
Jinfeng Yin
Chendi Zhu
Wangli Xu
Li W and Xu W conceived, designed, and supervised the study. Song P, Wang S, Zhu C and Cai C collected and cleaned the data. Jiang H and Song P analyzed the data. Jiang H and Song P wrote the drafts of the manuscript. Jiang H and Li W interpreted the findings. Li W and Xu W commented on and revised the drafts of the manuscript. All authors read and approved the final manuscript.
Correspondence to Wangli Xu or Weimin Li.
All authors declare no competing interests.
Additional file 1: Figure S1.
The number of maximum open beds of Fangcang shelter hospitals (A), designated hospitals (B) and the number of physicians (C) supporting Hubei from outside Hubei. Figure S2. The maximum open beds per day of quarantine points, Fangcang shelter hospitals and designated hospitals.
Jiang, H., Song, P., Wang, S. et al. Quantitative assessment of the effectiveness of joint measures led by Fangcang shelter hospitals in response to COVID-19 epidemic in Wuhan, China. BMC Infect Dis 21, 626 (2021). https://doi.org/10.1186/s12879-021-06165-w
Fangcang shelter hospital
Quantitative assessment
|
CommonCrawl
|
MiRNA-15a Mediates Cell Cycle Arrest and Potentiates Apoptosis in Breast Cancer Cells by Targeting Synuclein-γ
Li, Ping (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Xie, Xiao-Bing (Medical Laboratory Center, First Affiliated Hospital, Hunan University of Chinese Medicine) ;
Chen, Qian (Department of Molecular Biology and Genetics, Johns Hopkins University School of Medicine) ;
Pang, Guo-Lian (Department of Pathology, First People Hospital of Qujing) ;
Luo, Wan (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Tu, Jian-Cheng (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Zheng, Fang (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Liu, Song-Mei (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Han, Lu (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University) ;
Zhang, Jian-Kun (Department of Pathology, First People Hospital of Qujing) ;
Luo, Xian-Yong (Department of Pathology, First People Hospital of Qujing) ;
Zhou, Xin (Center for Gene Diagnosis, Zhongnan Hospital of Wuhan University)
Background: Recent studies have indicated that microRNA-15a (miR-15a) is dysregulated in breast cancer (BC). We aimed to evaluate the expression of miR-15a in BC tissues and corresponding para-carcinoma tissues. We also focused on the effects of miR-15a on the cellular behavior of MDA-MB-231 cells and on the expression of its target gene synuclein-γ (SNCG). Materials and Methods: The expression levels of miR-15a were analysed in BC formalin-fixed paraffin-embedded (FFPE) tissues by microarray and quantitative real-time PCR. CCK-8, cell cycle, and apoptosis assays were used to explore the potential functions of miR-15a in MDA-MB-231 human BC cells. A luciferase reporter assay confirmed direct targets. Results: Downregulation of miR-15a was detected in most primary BCs. Ectopic expression of miR-15a suppressed proliferation and promoted apoptosis in vitro. Further studies indicated that miR-15a may directly interact with the 3'-untranslated region (3'-UTR) of SNCG mRNA, downregulating its mRNA and protein expression levels. SNCG expression was negatively correlated with miR-15a expression. Conclusions: MiR-15a has a critical role in mediating cell cycle arrest and promoting cell apoptosis in BC, probably by directly targeting SNCG. Thus, it may be involved in the development and progression of BC.
microRNA-15a;breast cancer;SNCG;cell cycle;apoptosis
|
CommonCrawl
|
15th International Conference On Bioinformatics (InCoB 2016): Bioinformatics
A study on the application of topic models to motif finding algorithms
Josep Basha Gutierrez1,2 &
Kenta Nakai1,2
Topic models are statistical algorithms which try to discover the structure of a set of documents according to the abstract topics contained in them. Here we try to apply this approach to the discovery of the structure of the transcription factor binding sites (TFBS) contained in a set of biological sequences, which is a fundamental problem in molecular biology research for the understanding of transcriptional regulation. We present two methods that make use of topic models for motif finding. First, we developed an algorithm in which a set of biological sequences is treated as a set of text documents, and the k-mers contained in them as words, in order to build a correlated topic model (CTM) and iteratively reduce its perplexity. We also used the perplexity measurement of CTMs to improve our previous algorithm, which is based on a genetic algorithm and several statistical coefficients.
The algorithms were tested with 56 data sets from four different species and compared to 14 other methods by the use of several coefficients, both at nucleotide and site level. The results of our first approach showed a performance comparable to that of the other methods studied, especially at site level and in sensitivity scores, in which it scored better than any of the 14 existing tools. In the case of our previous algorithm, the new approach with the addition of the perplexity measurement clearly outperformed all of the other methods in sensitivity, both at nucleotide and site level, and in overall performance at site level.
The statistics obtained show that the performance of a motif finding method based on the use of a CTM is satisfying enough to conclude that the application of topic models is a valid method for developing motif finding algorithms. Moreover, the addition of topic models to a previously developed method dramatically increased its performance, suggesting that this combined algorithm can be a useful tool to successfully predict motifs in different kinds of sets of DNA sequences.
Sequence motifs are short patterns that occur in DNA with a certain frequency and that often have some distinct biological function. In most cases, that function is to serve as a binding site for proteins. When these proteins are transcription factors (TF), the corresponding motifs are called transcription factor binding sites (TFBS). Knowing these TFBS gives a better understanding of how transcriptional regulation works, and therefore the discovery of TFBS is one of the most fundamental problems in molecular biology research [1, 2]. Historically, a wide variety of methods have been applied to this problem, with computational methods currently being the prevailing approach. The computational problem consists of discovering motifs by searching for overrepresented (and/or conserved) DNA patterns in sets of functionally related genes, such as genes with similar functional annotation or genes with similar expression patterns. The number of different computational approaches to tackle this problem is constantly growing as computational techniques evolve. One of the most recent techniques, which, to the best of our knowledge, has not yet been applied to motif discovery, is known as topic models [3].
Topic models
Topic models are statistical algorithms, based on natural language processing and machine learning, which try to discover the structure of a set of documents according to the abstract topics contained in them by hierarchical Bayesian analysis [4]. These algorithms allow examining a set of documents and determining the existing topics and their distribution among the documents based on the statistical properties of the words of a specific vocabulary in each one of them.
Application of topic models to the motif finding problem
As far as we know, there is no literature about the application of topic models to motif finding algorithms. The first method here proposed tries to fill that gap and prove that topic models are a suitable method to the motif finding problem. In order to do so, it represents genetic sequences as documents and the k-mers contained in them as words, so that the patterns shown among these k-mers would be considered as motifs. Figure 1 shows a graphic representation of a topic model and how our algorithm would adapt to it.
Representation of a topic model adapted to the motif finding problem. This figure shows the basic structure of a topic model (in this case, an LDA). The terms specific to the case of the motif finding problem are stated in red under the original ones in blue, showing that the motif finding problem can be represented by a topic model by describing the DNA sequences as documents, the instances of each given motif as words in those documents, and the motifs as clusters of words or topics
The algorithm, as a topic model, would therefore examine a set of sequences to determine the hidden structure of the patterns contained in it. As this is totally consistent with the motif finding problem, it seems likely that the algorithm should be able to correctly discover motifs.
Addition of topic models to a previously developed algorithm (Statistical GA)
Prior to this study of topic models applied to the motif finding problem, we developed another algorithm with the structure of a GA, which used statistical coefficients as a fitness measurement [5]. From this point, we will refer to this algorithm as the Statistical GA algorithm. The Statistical GA algorithm was proven to effectively find overrepresented motifs in sets of sequences. However, it had the main drawback of reporting an excessive number of false positives. Along with the algorithm based entirely on topic models, in this study we also investigate how topic models can be applied to improve the previously developed algorithm and reduce the number of false positives.
How topic models work
From a computational point of view, the main problem of topic modeling is to infer the concealed topic structure from the examination of the documents.
A topic is formally defined as a multinomial distribution over a fixed vocabulary. In other words, topic models consider that a document could, conceptually, be generated from a set of topics, each one of them being a set of words related to that topic. So that, to create a document, the words would be selected iteratively from the topics that we desire to appear in it. For example, if we want a document that is two thirds about stem cells and one third about cancer, we would create two topics (stem cells and cancer) as sets of words typically related to them, and then construct the document by selecting two thirds of the words from the stem cells set and one third from the cancer set.
Topic modeling consists of reversing this conceptual approach, considering that the topics of a document (or a set of documents) can be inferred from the proportions of the words contained in them.
The intuition behind this algorithm is that all of the documents in the collection share the same set of topics, but each one of them in a different proportion, which is reflected in the distribution of the different words among them.
The inputs of a topic model are a set of N documents (d_1, …, d_N) and the number of topics K that are expected to be contained in the documents. For each one of the documents d_i to be analyzed, the most basic algorithm would process the words in a two-stage process.
Choose a random distribution of the document over the topics (t_1, …, t_K).
For each word w_j in the document:
Choose a random topic t_r from the distribution over topics previously generated.
Once w_j is assigned, for each one of the topics t_m in the current set of topics, compute the proportion of words in the document d_i that are currently assigned to the topic t_m, P(t_m | d_i), and the proportion of assignments to the topic t_m over all of the documents that come from the word w_j, P(w_j | t_m), and then reassign w_j to the topic that gives the best probability P(t_m | d_i) * P(w_j | t_m).
A stable set of assignments will be reached after repeating the above steps for several iterations.
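A minimal sketch of this two-stage reassignment loop, written in Python, is shown below. It follows the proportions P(t|d) and P(w|t) literally, starting from random assignments and recomputing the counts once per iteration; production implementations (for example, collapsed Gibbs sampling for LDA) add priors and smoothing, which are omitted here.

```python
import random
from collections import Counter

def fit_simple_topic_model(documents, K, iterations=50, seed=0):
    """documents: list of lists of words. Returns the per-word topic assignments."""
    rng = random.Random(seed)
    assignment = [[rng.randrange(K) for _ in doc] for doc in documents]
    for _ in range(iterations):
        # Recompute the counts from the current assignments (batch update).
        doc_topic = [Counter(z) for z in assignment]          # topic counts per document
        word_topic = Counter()                                # (word, topic) counts
        topic_total = Counter()                               # words assigned to each topic
        for doc, z in zip(documents, assignment):
            for w, t in zip(doc, z):
                word_topic[(w, t)] += 1
                topic_total[t] += 1
        # Reassign every word to the topic maximizing P(t | d) * P(w | t).
        for d, (doc, z) in enumerate(zip(documents, assignment)):
            for i, w in enumerate(doc):
                def score(t):
                    p_t_d = doc_topic[d][t] / len(doc)
                    p_w_t = word_topic[(w, t)] / topic_total[t] if topic_total[t] else 0.0
                    return p_t_d * p_w_t
                z[i] = max(range(K), key=score)
    return assignment

docs = [["acgtga", "ttacgt", "acgtga"], ["ttacgt", "ggcctt", "ttacgt"]]
print(fit_simple_topic_model(docs, K=2))
```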
The benefit of the use of topic models is that they offer an automated solution to the organization and annotation of large text archives. However, this is not their only utility, and they can be applied to many other fields, such as the subject in question here, bioinformatics.
Creating a motif finding algorithm based on topic models from scratch
Algorithm structure
The first problem that arises when adapting a topic model to the motif finding problem comes from the idea that, whereas the words in a text document are clearly separated by spaces, in the case of a genetic sequence a mechanism to select the k-mers that will form the vocabulary must be defined.
A typical topic model, as a first step, usually creates a vocabulary from the words in the documents by discarding meaningless words (in terms of determining a topic), such as "the" or "of" in documents written in English, as well as words that are not repeated frequently, since in both cases they would not help to find the hidden topics and they would instead add noise to the algorithm. Again, this is consistent with a motif finding algorithm, so in this case an initial vocabulary would need to be created similarly, but in this case by selecting k-mers that are overrepresented in the set of sequences.
From this a new problem arises, namely the impossibility of selecting all of the possible overrepresented patterns in a reasonable amount of time. In order to deal with that, a genetic algorithm (GA) [6] structure was chosen as the basis of the algorithm here presented, with the topic model being the approach used to select the best possible solutions in the fitness function.
Algorithm implementation
The method here proposed is a heuristic algorithm, that is, it gives an approximate (not necessarily optimal) solution, and it is also stochastic, so that each time it is run with the same set of sequences it will likely produce different results. It searches only for ungapped motifs, so that patterns which contain gaps might be predicted split into several separate motifs. Also, in contrast with other motif finding algorithms, which usually assume that there is at least one instance of the motifs in every sequence of the data set, it makes no assumptions about how the motifs are distributed among the sequences.
The algorithm is implemented as a classic GA. In other words, it starts by creating a population of possible solutions (individuals) for our problem and then it iterates over them, keeping the best (fittest) solutions of every iteration, discarding the worst ones, and creating new solutions based on the fittest ones for the following iteration, until an optimum solution is found or a given number of iterations is reached.
Therefore, the only aspects that need to be defined are how the population is represented, how it is evaluated (fitness), how the fittest individuals are selected in every iteration, how new individuals are generated by the surviving ones (crossover, mutation) and when the algorithm will stop iterating and report a final solution (or set of solutions).
Each individual of the population is a set of m k-mers which can be contained in any of the sequences of the data set. The k-mers can be of any length between a minimum and a maximum passed as a parameter. The initialization works as follows:
Given a set of sequences, a minimum k-mer length k_min, a maximum k-mer length k_max, a minimum number of repetitions for each k-mer in the data set c_min, a population size N, and a number of words per individual in the population n. For each one of the N individuals, iterate until an initial set of n k-mers is reached (a code sketch follows this list):
Choose a random word length k within the range k_min to k_max.
Choose a random sequence from the data set.
Choose a random position p in that sequence between 0 and l - k, where l is the sequence length.
Select the word w starting at the position p with length k.
Count the number of occurrences c of the word w in the given sequence, allowing for 25% mismatches.
Shuffle w into w_s and count the number of occurrences c_s of the word w_s in the given sequence, allowing for 25% mismatches.
If c - c_s is greater than or equal to c_min, add the k-mer to the set for the given individual.
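A minimal sketch of this initialization procedure is shown below. The 25% mismatch counting and the shuffle-based background check follow the description above literally; the helper names, the (word, sequence index, position) representation of a k-mer, and the parameter values in the example call are illustrative assumptions rather than the authors' implementation.

```python
import random

def count_occurrences(word, sequence, max_mismatch_frac=0.25):
    """Count occurrences of word in sequence, allowing up to 25% mismatches."""
    k = len(word)
    max_mm = int(k * max_mismatch_frac)
    hits = 0
    for i in range(len(sequence) - k + 1):
        mismatches = sum(a != b for a, b in zip(word, sequence[i:i + k]))
        if mismatches <= max_mm:
            hits += 1
    return hits

def init_individual(sequences, k_min, k_max, c_min, n, rng=random, max_attempts=10_000):
    """Build one individual: a list of n k-mers that look overrepresented."""
    kmers, attempts = [], 0
    while len(kmers) < n and attempts < max_attempts:
        attempts += 1
        k = rng.randint(k_min, k_max)                 # random word length
        seq_idx = rng.randrange(len(sequences))       # random sequence
        seq = sequences[seq_idx]
        if len(seq) <= k:
            continue
        p = rng.randrange(len(seq) - k)               # random start position
        word = seq[p:p + k]
        shuffled = "".join(rng.sample(word, k))       # shuffled background version
        if count_occurrences(word, seq) - count_occurrences(shuffled, seq) >= c_min:
            kmers.append((word, seq_idx, p))
    return kmers

sequences = ["ACGTGACGTTACGTGA", "TTACGTGAGGACGTGA"]
print(init_individual(sequences, k_min=4, k_max=6, c_min=1, n=3))
```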
The fitness calculation is the more crucial step in a GA. It is at this moment when the topic models must be applied and provide a way to obtain solutions for the motif finding problem.
The type of topic model chosen was a correlated topic model (CTM) [7], since it takes into consideration the correlation between topics, and, biologically speaking, motifs also usually show correlation, given that transcription factors which have correlated biological functions bind to them. A CTM makes use of a logistic normal distribution, which, through the transformation of a multivariate normal random variable, allows for a general pattern of variability between the components of the distribution [8]. More specifically, the CTM contained in the R package topicmodels [9] was the method used for the construction of the CTM in every iteration.
For each one of the individuals of the population, its set of k-mers, along with the original set of sequences, is fed to a CTM as the vocabulary and the documents respectively. Then the perplexity of the resulting model is measured and returned as the fitness of the given individual.
The perplexity of a probabilistic model is a measure of the accuracy with which its distribution predicts a sample. It is the standard used in natural language processing to evaluate the accuracy of the model. The lower the perplexity, the better the model fits the data. The perplexity is calculated by splitting the dataset into two parts: one for training and one for testing, and then measuring the log-likelihood of the unseen documents. As the perplexity is calculated using the corresponding function provided by Hornik and Grün for their CTM implementation, the mathematical formula for perplexity used in this method follows their same definition [9]:
$$ \mathrm{Perp}\left(\omega \right)= \exp\left\{-\frac{\log \left(p\left(\omega \right)\right)}{\sum_{d=1}^{D}\sum_{j=1}^{V}{n}^{(jd)}}\right\} $$
where n^(jd) refers to the frequency with which the jth word appears in the document d.
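The fragment below computes this quantity directly from a held-out log-likelihood and the word counts of the test documents; the numbers in the example are arbitrary. In the study itself the value is obtained from the perplexity function that accompanies the CTM implementation in the R package topicmodels.

```python
import math

def perplexity(log_likelihood, documents):
    """Perp(w) = exp(-log p(w) / total number of word tokens in the documents)."""
    total_tokens = sum(len(doc) for doc in documents)   # sum over d and j of n^(jd)
    return math.exp(-log_likelihood / total_tokens)

# Hypothetical held-out log-likelihood and test documents, for illustration only.
test_docs = [["acgtga", "ttacgt", "acgtga"], ["ttacgt", "ggcctt"]]
print(perplexity(log_likelihood=-42.0, documents=test_docs))
```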
The algorithm tries to give an optimum set of solutions by minimizing the perplexity. Therefore, for the selection of the fittest candidates, an elitist approach is used. In other words, after all of the fitness measurements have been done for a specific generation of individuals, these are selected in random pairs, in which the fittest individual (lower perplexity) survives and the less fit individual is eliminated from the population. After this stage, N/2 fit individuals remain in the population.
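A sketch of this elitist pairwise selection is given below; fitness holds the perplexity of each individual (lower is better), and the random pairing follows the description above.

```python
import random

def select_survivors(population, fitness, rng=random):
    """Randomly pair individuals; in each pair the one with lower perplexity survives."""
    order = list(range(len(population)))
    rng.shuffle(order)
    survivors = []
    for i in range(0, len(order) - 1, 2):
        a, b = order[i], order[i + 1]
        survivors.append(population[a] if fitness[a] <= fitness[b] else population[b])
    return survivors   # roughly N/2 individuals remain

print(select_survivors([f"ind{i}" for i in range(6)], [5.0, 3.0, 9.0, 1.0, 7.0, 2.0]))
```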
So the next step is generating new individuals by the use of the crossover function to create a new population of N individuals.
The Crossover step is performed after the Evaluation and Selection step to generate new individuals in the population for the next generation.
First, two individuals are randomly selected from the population to act as parents.
The crossover function in this case is a classic one-point crossover in which two children are generated by swapping the data beyond a randomly selected crossover point between both parents. In this case, the crossover point is an index in the array of k-mers of the individuals, which indicates which k-mers to select from each one of the parents (these k-mers are shuffled before this step).
The two newly formed children are added to the population and the process is repeated until the population contains N individuals again.
Mutation happens randomly, according to a parameter that defines its frequency. It is also applied to random individuals. The mutated individual will have a random number of its k-mers slightly shifted from their original position (the position in the sequence randomly increases or decreases by a number no larger than the length of the given k-mer).
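The one-point crossover and the positional mutation can be sketched as follows. Individuals are represented as lists of (word, sequence index, position) tuples, as in the earlier initialization sketch, and this representation, like the helper names, is an illustrative assumption rather than the authors' code.

```python
import random

def crossover(parent_a, parent_b, rng=random):
    """Classic one-point crossover over the (shuffled) k-mer lists of two parents."""
    a, b = parent_a[:], parent_b[:]
    rng.shuffle(a)
    rng.shuffle(b)
    point = rng.randrange(1, min(len(a), len(b)))   # assumes at least two k-mers each
    return a[:point] + b[point:], b[:point] + a[point:]

def mutate(individual, sequences, rng=random):
    """Shift a random subset of k-mers by at most the length of the k-mer."""
    mutated = individual[:]
    for i in rng.sample(range(len(mutated)), rng.randint(1, len(mutated))):
        word, seq_idx, pos = mutated[i]
        k = len(word)
        shift = rng.randint(-k, k)
        new_pos = min(max(pos + shift, 0), len(sequences[seq_idx]) - k)
        mutated[i] = (sequences[seq_idx][new_pos:new_pos + k], seq_idx, new_pos)
    return mutated

seqs = ["ACGTGACGTTACGTGA", "TTACGTGAGGACGTGA"]
p1 = [("ACGT", 0, 0), ("GACG", 0, 4), ("TTAC", 1, 0)]
p2 = [("CGTG", 0, 1), ("ACGT", 1, 2), ("GGAC", 1, 8)]
print(crossover(p1, p2))
print(mutate(p1, seqs))
```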
Once the GA is terminated, the fittest individuals are sorted by perplexity (from lower to higher) and selected accordingly as solutions depending on a parameter set by the user that defines how many motifs are expected to be found in the data set. For each one of these solutions, the CTM is generated once again and each one of the resulting topics is returned as a motif of the data set.
Improving the statistical GA algorithm by the use of the perplexity measurement
The Statistical GA algorithm [5] works as a GA in which the fitness function takes three steps to discard unfit solutions based on three different coefficients [10, 11], with the Mann-Whitney U-test [12] as the main method to select the final candidates. Each candidate is a k-mer of a fixed length (defined as a parameter), represented as a position in a supersequence, which is a concatenation, in random order, of all the genetic sequences received as input. To simplify the calculations, this supersequence is divided into a set of subsequences, so that in each iteration the fitness is calculated for a given subsequence and the candidates which show no overrepresentation in the given segment are swiftly discarded without further computations.
Adding the use of topic models
The main drawback of the Statistical GA algorithm is that it reported a large number of false positives, and one of the main reasons for this was that it had no way to measure the confidence of the reported results. Thus, it reported at least one motif for every data set, weighing down the overall performance of the algorithm. The solution here proposed for that problem consists of taking the final set of instances provided by the algorithm, creating a CTM with them, and measuring the perplexity. Then, only in the cases in which the perplexity is lower than a certain threshold are the motifs returned by the algorithm reported (Fig. 2). The impact of this is that the Statistical GA algorithm now only reports motifs for those data sets in which there is a CTM that fits the solutions found well. That way, the algorithm is able to measure the confidence of the solutions obtained by the main GA. The threshold was set at 100 for empirical reasons, given that in the tests performed the motifs reported with a perplexity higher than 100 tended to be false positives.
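The filtering step can be expressed as a small wrapper like the one below. The ctm_perplexity argument stands in for whatever routine fits a CTM to the candidate instances and returns its perplexity (in this study, the CTM implementation of the R package topicmodels); the dummy callable in the example exists only to make the sketch executable.

```python
PERPLEXITY_THRESHOLD = 100.0   # empirically chosen cutoff described in the text

def report_motif(candidate_instances, sequences, ctm_perplexity):
    """Report the motif only if a CTM built from its instances fits the data well."""
    if ctm_perplexity(candidate_instances, sequences) < PERPLEXITY_THRESHOLD:
        return candidate_instances    # confident enough: report the motif
    return None                       # likely a false positive: report nothing

# Dummy perplexity routine, standing in for the CTM fit, for demonstration only.
dummy_perplexity = lambda instances, seqs: 42.0
print(report_motif(["ACGTGA", "ACGTGT"], ["...sequences..."], dummy_perplexity))
```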
Statistical GA algorithm workflow after including the use of topic models. This figure describes the updated flow of the Statistical GA algorithm after adding the perplexity measurement for the selection of solutions. There are four steps in this flow: first, the candidate instances are selected by the original Statistical GA; second, these instances are clustered according to their similarity, calculated by their Hamming distance; third, the CTM is built and its perplexity measured; and last, the motif is reported if the perplexity calculated in the previous step is lower than 100
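A minimal sketch of this post-processing filter is shown below; the CTM fitting itself is abstracted behind a caller-supplied function (for instance wrapping the R package topicmodels cited in the references), so only the thresholding logic is spelled out:

```python
PERPLEXITY_THRESHOLD = 100   # motifs above this value tended to be false positives

def report_motifs(candidate_motifs, ctm_perplexity, threshold=PERPLEXITY_THRESHOLD):
    """Report only the motifs whose instances are well described by a CTM.

    candidate_motifs : list of sets of candidate instances found by the GA
    ctm_perplexity   : callable that builds a CTM from one set of instances
                       and returns its perplexity (implementation not shown)
    """
    return [instances for instances in candidate_motifs
            if ctm_perplexity(instances) < threshold]
```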
Several studies [1, 2] concluded that evaluating the performance of a motif finding tool is a difficult task, and that there is no comparison method that can give a definitive conclusion about which tool is the best and which is the worst. Keeping this in mind, both methods presented here were tested using the assessment proposed by Tompa et al. [1] in their study to evaluate the performance of several motif finding tools by the scores obtained in eight different statistical coefficients. It is worth mentioning that only the accuracy of the tools in predicting binding sites is evaluated; other aspects, such as the running time of each method, are not measured. The benchmark provided by the assessment, which is the same one used in this study, is formed by 52 data sets belonging to four different species (fly, human, mouse and yeast) plus 4 negative controls, for a total of 56 data sets. These 56 data sets are divided into three categories: data sets of Type Real, which correspond to the real promoter sequences containing the original sites that the different tools will try to locate; data sets of Type Generic, which correspond to promoter sequences chosen randomly from the same genome; and data sets of Type Markov, which correspond to synthetic sequences generated by a Markov chain. The original assessment compared the efficiency of 14 different tools (Additional file 1: Table S1) [13–26]. Each of those tools was allowed to report only one motif (or none) per data set. The motif was reported as a list of instances and their corresponding positions in the sequences of the data set. The accuracy of how well these instances match the real instances of the motif is then studied both at nucleotide and site level. At site level, a predicted site is considered to match the known site if it overlaps at least one quarter of it. With this information, the following eight statistics are used to measure the accuracy of each method:
nSn (Sensitivity, nucleotide level):
$$ nSn = \frac{nTP}{nTP+nFN} $$
nPPV (Positive Predicted Value, nucleotide level):
$$ nPPV = \frac{nTP}{nTP+nFP} $$
nSp (Specificity):
$$ nSp = \frac{nTN}{nTN+nFP} $$
nPC (Performance Coefficient, nucleotide level) [27]:
$$ nPC = \frac{nTP}{nTP+nFN+nFP} $$
nCC (Correlation Coefficient) [28]:
$$ nCC = \frac{nTP \times nTN - nFN \times nFP}{\sqrt{\left(nTP+nFN\right)\left(nTN+nFP\right)\left(nTP+nFP\right)\left(nTN+nFN\right)}} $$
sSn (Sensitivity, site level):
$$ sSn = \frac{sTP}{sTP+sFN} $$
sPPV (Positive Predicted Value, site level):
$$ sPPV = \frac{sTP}{sTP+sFP} $$
sASP (Average Site Performance) [28]:
$$ sASP = \frac{sSn+ sPPV}{2} $$
The coefficients starting with n are statistics at the nucleotide level, and the coefficients starting with s are statistics at the site level. TP, FP, TN and FN refer to the number of true positives, false positives, true negatives and false negatives, respectively.
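For reference, the eight statistics can be computed directly from the raw confusion counts as in the following sketch (the minus sign in the correlation coefficient follows the standard definition of [28]):

```python
from math import sqrt

def assessment_statistics(nTP, nFP, nFN, nTN, sTP, sFP, sFN):
    """Accuracy statistics used in the assessment, from raw confusion counts."""
    nSn  = nTP / (nTP + nFN)                       # sensitivity, nucleotide level
    nPPV = nTP / (nTP + nFP)                       # positive predicted value
    nSp  = nTN / (nTN + nFP)                       # specificity
    nPC  = nTP / (nTP + nFN + nFP)                 # performance coefficient
    nCC  = (nTP * nTN - nFN * nFP) / sqrt(
        (nTP + nFN) * (nTN + nFP) * (nTP + nFP) * (nTN + nFN))  # correlation coefficient
    sSn  = sTP / (sTP + sFN)                       # sensitivity, site level
    sPPV = sTP / (sTP + sFP)                       # positive predicted value, site level
    sASP = (sSn + sPPV) / 2                        # average site performance
    return {"nSn": nSn, "nPPV": nPPV, "nSp": nSp, "nPC": nPC,
            "nCC": nCC, "sSn": sSn, "sPPV": sPPV, "sASP": sASP}
```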
Both methods described here were tested using the methodology presented in this assessment and compared to the 14 methods with which it was originally carried out.
The CTM algorithm was run with the following parameters:
Motif width between 6 and 30
Population size: 50
Number of generations: 90
Number of instances per individual: 1000
Maximum number of solutions: 10
Mutation rate: 0.1
As for the Statistical GA algorithm, it was run with the same parameters as in the original study [5]. After adding the perplexity measurement in the post-processing stage, a new restriction was included: only the motifs reported with a perplexity lower than 100 were considered as solutions.
All of the tests were run in a laptop computer with a 2.6 GHz Intel Core i5 processor and an 8 GB 1600 MHz DDR3 memory.
Figure 3 summarizes the average values of the statistics previously defined for each one of the 14 tools originally analyzed in the assessment and for both of our proposed tools. Figure 4 shows the average values grouped by organisms.
Average statistical values for all 56 data sets. This figure shows the average scores obtained by each one of the tools studied for each one of seven different statistics for all the 56 data sets of the benchmark. The Statistical GA method is shown as GA approach, and the CTM method as CTM approach
Average statistical values for each organism. This figure shows the average scores obtained by each one of the tools studied for each one of seven different statistics grouped by the four different species contained in the data sets. The Statistical GA method is shown as GA approach, and the CTM method as CTM approach
To calculate the average values, we followed the same process as in the original assessment. First, the values of nTP, nFP, nFN, nTN, sTP, sFP and sFN obtained for each data set are summed. Then, these summed values are treated as the scores of one large data set, and the eight statistics are calculated for that large data set, obtaining the average scores.
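A short sketch of that averaging procedure, assuming the per-data-set counts are stored in dictionaries and reusing the assessment_statistics helper sketched above; a data set for which no motif was reported simply contributes zeros:

```python
def combined_scores(per_dataset_counts):
    """Sum the confusion counts over all data sets and score the total
    as if it were one large data set."""
    keys = ("nTP", "nFP", "nFN", "nTN", "sTP", "sFP", "sFN")
    totals = {k: sum(counts.get(k, 0) for counts in per_dataset_counts) for k in keys}
    return assessment_statistics(**totals)   # helper from the previous sketch
```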
As previously stated, none of the statistics analyzed should ever be taken as an absolute measurement of the quality of the methods. The authors of the assessment [1] themselves indicate several factors that affect the results and might give a wrong impression about the performance of the different algorithms:
This assessment, like any other possible method, can never be considered a standard way to measure the biological significance of the studied tools, since it is still unknown how the underlying biology works.
As each algorithm was required to predict only one motif (or none) for each data set, there might be an arbitrary component in the candidate selected by each tool.
The assessment requires each tool to report only one motif (or none) for each data set. However, it is known that the data sets, especially those of Type Real, are likely to contain more than one motif.
Many of the known binding sites are longer than 30 bp. Our tools, like most of the others, were run for motifs no longer than 30 bp in the case of the CTM algorithm and 12 bp in the case of the Statistical GA algorithm. This affects the performance at nucleotide level even if the performance at site level is high.
The assessment relies on TRANSFAC [29] as its only source of known binding sites. As the information obtained from TRANSFAC is not contrasted with other sources, it might as well contain errors.
The method explained above for computing the average scores tends to penalize tools that make wrong predictions more heavily than tools that make no predictions at all, as 0 is the default value for the cases in which no motifs are reported.
As long as all these factors are not forgotten, some important conclusions regarding the performance of the different methods can still be inferred from the use of the benchmark proposed in the assessment.
First of all, the CTM method shows levels of Sensitivity (both at nucleotide level, nSn, and at site level, sSn) surpassed only by our other method, the Statistical GA (Figs. 3 and 4). It also shows a remarkable Average Site Performance (sASP) and, for the remaining statistics, even though the numbers obtained are not especially satisfying, they are comparable to those of most other methods.
Thus, we can already conclude that topic models are a perfectly valid basis on which to design motif finding algorithms.
As for the Statistical GA method, Fig. 5 shows the improvement in all of the average statistics after narrowing down the results reported according to the perplexity shown in the CTM. All of the scores for the different statistics are practically doubled after filtering out the motifs for which the perplexity of the corresponding CTM is higher than 100.
Comparison of statistics for the Statistical GA method before and after filtering by perplexity. This figure shows the improvement in the average scores of the Statistical GA method for seven different statistics obtained by filtering the results by their perplexity in the corresponding CTM
This tool now clearly outperforms most of the other methods, showing levels of nSn, sSn, and sASP to which any of the other tools can hardly be compared (Figs. 3 and 4). This further proves the usefulness of topic models for motif discovery tools.
Given the nature of both methods, and the high number of true positives shown (especially at site level), it seems clear that both succeed in predicting many of the sites but lack a mechanism to detect false positives. In other words, as the high scores in Sensitivity and Average Site Performance show, both methods correctly report most of the known motifs, but they locate too many instances of them in the input sequences. As a result, the number of false positives reported in the assessment, especially at nucleotide level, appears too large, in spite of the correctness of the consensus or the score matrix given by the algorithms. We therefore believe that the high number of false positives is due, to a large extent, to the nature of the assessment. This drawback was considerably reduced in the new version of the Statistical GA algorithm, thanks to the use of the perplexity measurement to avoid predicting wrong motifs. However, we believe the number of false positives in the assessment could be reduced further if, in the post-processing step, only the correct instances of the known site were selected by means of a weight matrix based on the consensus sequence, instead of simply reporting all of the candidate instances found, as both tools currently do. For the CTM method, a way to filter out results that are not reported with high confidence is required as well.
The method that gives the best overall statistics after the Statistical GA method is Weeder. As the authors of the assessment clarify [1], one of the main reasons for this is the way in which it was run: the author of those tests chose a cautious mode, that is, to predict a motif only when there was high confidence in its existence. The same effect explains the great improvement in the statistics for our Statistical GA tool, which is mostly due to the way of calculating the average statistics proposed in the assessment.
As for the running time, as stated before, it is not an object of the assessment, which focuses on the accuracy of the predicted sites. However, it is worth mentioning that the CTM method slows down considerably when the number of input sequences is greater than three. Therefore, some solution to this problem, such as dividing the data sets into subgroups of three or fewer sequences, will be required. The Statistical GA method, on the other hand, is able to report results for data sets of any size in a matter of minutes.
DNA motif finding remains one of the most challenging tasks for researchers, and so does the task of comparing the performance of the different existing tools, given that each of them has been designed using very heterogeneous algorithms and models, and that little is still known about the underlying biology. Therefore, we must insist that it is currently impossible to define a standard quality measurement to evaluate the performance of the different tools.
Most of the studies on the performance of motif finding algorithms [2] conclude that the best option for biologists trying to predict sites in a set of sequences is never to rely on a single tool, but rather to use a few complementary tools and combine the top predictions of each of them.
In line with this, we believe that the methods described here, despite their drawbacks, can perfectly well be part of the set of tools that biologists use in combination to predict de novo binding sites in sets of biological sequences. In the case of the CTM method especially, there are still many improvements to be made. However, given the results, it can already serve as a basis for future tools that use topic models as a reliable method for motif finding.
ASP:
Average site performance
CTM:
Correlated topic model
FN:
False negative
FP:
False positive
GA:
Genetic algorithm
TFBS:
Transcription factor binding site
TN:
True negative
TP:
True positive
Tompa M, Li N, Bailey TL, Church GM, De Moor B, Eskin E, Favorov AV, Frith MC, Fu Y, Kent WJ, et al. Assessing computational tools for the discovery of transcription factor binding sites. Nat Biotechnol. 2005;23(1):137–47.
Das MK, Dai HK. A survey of DNA motif finding algorithms. BMC Bioinf. 2007;8(7):1.
Blei DM. Probabilistic topic models. Commun ACM. 2012;55(4):77–84.
Blei DM, Ng AY, Jordan MI. Latent dirichlet allocation. J Mach Learn Res. 2003;3(Jan):993–1022.
Gutierrez JB, Frith M, Nakai K. A Genetic Algorithm for Motif Finding Based on Statistical Significance. In: International Conference on Bioinformatics and Biomedical Engineering. Granada: Springer International Publishing; 2015. p. 438–49.
Mitchell M. An introduction to genetic algorithms. Cambridge, MA: MIT Press; 1996.
Blei D, Lafferty J. Correlated topic models. Adv Neural Inf Proces Syst. 2006;18:147.
Aitchison J. The statistical analysis of compositional data. J R Stat Soc B Methodol. 1982;44(2):139–77.
Hornik K, Grün B. topicmodels: An R package for fitting topic models. J Stat Softw. 2011;40(13):1–30.
Abnizova I, te Boekhorst R, Walter K, Gilks WR. Some statistical properties of regulatory DNA sequences, and their use in predicting regulatory regions in the Drosophila genome: the fluffy-tail test. BMC Bioinf. 2005;6(1):109.
Shu JJ, Li Y. A statistical thin-tail test of predicting regulatory regions in the Drosophila genome. Theor Biol Med Model. 2013;10(1):11.
Mann HB, Whitney DR. On a test of whether one of two random variables is stochastically larger than the other. Ann Math Stat. 1947;18:50–60.
Favorov AV, Gelfand MS, Gerasimova AV, Mironov AA, Makeev VJ. Gibbs sampler for identification of symmetrically structured, spaced DNA motifs with improved estimation of the signal length and its validation on the ArcA binding sites. Proc of BGRS. 2004;2004:269–72.
Pavesi G, Mereghetti P, Mauri G, Pesole G. Weeder Web: discovery of transcription factor binding sites in a set of sequences from co-regulated genes. Nucleic Acids Res. 2004;32:W199–203.
Sinha S, Tompa M. YMF: a program for discovery of novel transcription factor binding sites by statistical overrepresentation. Nucleic Acids Res. 2003;31:3586–8.
Pevzner PA, Sze SH. Combinatorial approaches to finding subtle signals in DNA sequences. In: ISMB. 2000. p. 269–78.
Burset M, Guigo R. Evaluation of gene structure prediction programs. Genomics. 1996;34(3):353–67.
Wingender E, Dietze P, Karas H, Knüppel R. TRANSFAC: a Database on transcription factors and their DNA binding sites. Nucleic Acids Res. 1996;24:238–41.
Hughes JD, Estep PW, Tavazoie S, Church GM. Computational identification of cis-regulatory elements associated with functionally coherent groups of genes in Saccharomyces cerevisiae. J Mol Biol. 2000;296:1205–14.
Workman CT, Stormo GD. ANN-Spec: a method for discovering transcription factor binding sites with improved specificity. In: Pac Symp Biocomput. 2000. p. 467–78.
Hertz GZ, Stormo GD. Identifying DNA and protein patterns with statistically significant alignments of multiple sequences. Bioinformatics. 1999;15:563–77.
Frith MC, Hansen U, Spouge JL, Weng Z. Finding functional sequence elements by multiple local alignment. Nucleic Acids Res. 2004;32:189–200.
Ao W, Gaudet J, Kent WJ, Muttumu S, Mango SE. Environmentally induced foregut remodeling by PHA-4/FoxA and DAF-12/NHR. Science. 2004;305:1743–6.
Bailey TL, Elkan C. The value of prior knowledge in discovering motifs with MEME. In: Ismb. 1995. p. 21–9.
Eskin E, Pevzner P. Finding composite regulatory patterns in DNA sequences. Bioinformatics. 2002;18 suppl 1:S354–63.
Thijs G, Lescot M, Marchal K, Rombauts S, De Moor B, Rouze P, Moreau Y. A higher-order background model improves the detection of promoter regulatory elements by Gibbs sampling. Bioinformatics. 2001;17:1113–22.
van Helden J, Andre B, Collado-Vides J. Extracting regulatory sites from the upstream region of yeast genes by computational analysis of oligonucleotide frequencies. J Mol Biol. 1998;281:827–42.
van Helden J, Rios AF, Collado-Vides J. Discovering regulatory elements in noncoding sequences by analysis of spaced dyads. Nucleic Acids Res. 2000;28:1808–18.
Régnier M, Denise A. Rare events and conditional events on random strings. Discrete Math Theor Comput Sci. 2004;6:191–214.
We would like to thank Martin Frith of the Computational Biology Research Center at the AIST Tokyo Waterfront Bio-IT Research Building for his contribution to the original development of the Statistical GA method used in this research. We also thank the reviewers of this manuscript for their comments, which certainly helped to improve it and will be useful for future developments.
This article has been published as part of BMC Bioinformatics Volume 17 Supplement 19, 2016. 15th International Conference On Bioinformatics (INCOB 2016): bioinformatics. The full contents of the supplement are available online https://bmcbioinformatics.biomedcentral.com/articles/supplements/volume-17-supplement-19.
Publication of this article was funded by the corresponding author's institution.
The complete code of both methods presented in this article is available at https://github.com/basha-u-tokyo/motif-finding-topic-models.
The datasets supporting the conclusions of this article are available in the University of Washington repository, provided by Tompa et al. [1] in their study to compare the performance of different motif finding methods using several statistical coefficients, http://bio.cs.washington.edu/assessment/.
JBG conceived and developed the methodology and drafted the manuscript, and KN supervised the whole research. All authors read and approved the final manuscript.
Department of Computational Biology and Medical Sciences, Graduate School of Frontier Sciences, The University of Tokyo, 277-8561, Chiba, Japan
Josep Basha Gutierrez & Kenta Nakai
Human Genome Center, The Institute of Medical Science, The University of Tokyo, 4-6-1 Shirokane-dai, Minato-ku, 108-8639, Tokyo, Japan
Josep Basha Gutierrez
Kenta Nakai
Correspondence to Kenta Nakai.
This table shows the tools that were studied in the original assessment by Tompa et al. [1] and the two methods presented in this study, each one with a short description of their underlying methodologies. (DOCX 15 kb)
Basha Gutierrez, J., Nakai, K. A study on the application of topic models to motif finding algorithms. BMC Bioinformatics 17, 502 (2016). https://doi.org/10.1186/s12859-016-1364-3
Nucleotide Level
Motif Finding
communications physics
Reconfigurable flows and defect landscape of confined active nematics
Topological chaos in active nematics
Amanda J. Tan, Eric Roberts, … Linda S. Hirst
Polarity and chirality control of an active fluid by passive nematic defects
Alfredo Sciortino, Lukas J. Neumann, … Andreas R. Bausch
Continuous generation of topological defects in a passively driven nematic liquid crystal
Maruša Mur, Žiga Kos, … Igor Muševič
Active nematics
Amin Doostmohammadi, Jordi Ignés-Mullol, … Francesc Sagués
Living proof of effective defects
M.-A. Fardin & B. Ladoux
Tunable colloid trajectories in nematic liquid crystals near wavy walls
Yimin Luo, Daniel A. Beller, … Kathleen J. Stebe
Motile dislocations knead odd crystals into whorls
Ephraim S. Bililign, Florencio Balboa Usabiaga, … William T. M. Irvine
Electrically tunable collective motion of dissipative solitons in chiral nematic films
Yuan Shen & Ingo Dierking
Defect dynamics in active smectics induced by confining geometry and topology
Zhi-Feng Huang, Hartmut Löwen & Axel Voigt
Jérôme Hardoüin1,2 na1,
Rian Hughes3 na1,
Amin Doostmohammadi ORCID: orcid.org/0000-0002-1116-42683,
Justine Laurent4,5,
Teresa Lopez-Leon6,
Julia M. Yeomans3,
Jordi Ignés-Mullol ORCID: orcid.org/0000-0001-7963-37991,2 &
Francesc Sagués1,2
Communications Physics volume 2, Article number: 121 (2019) Cite this article
Condensed-matter physics
Soft materials
The physics of active liquid crystals is mostly governed by the interplay between elastic forces that align their constituents, and active stresses that destabilize the order with constant nucleation of topological defects and chaotic flows. The average distance between defects, also called active length scale, depends on the competition between these forces. Here, in experiments with the microtubule/kinesin active nematic system, we show that the intrinsic active length scale loses its relevance under strong lateral confinement. Transitions are observed from chaotic flows to vortex lattices and defect-free unidirectional flows. Defects, which determine the active flow behaviour, are created and annihilated on the channel walls rather than in the bulk, and acquire a strong orientational order in narrow channels. Their nucleation is governed by an instability whose wavelength is effectively screened by the channel width. These results are recovered in simulations, and the comparison highlights the role of boundary conditions.
Active matter refers to systems composed of self-driven units, such as tissues, bacterial suspensions, or mixtures of biofilaments and motor proteins, that organise their textures and flows autonomously by consuming either stored or ambient free energy1,2. This distinctive hallmark sometimes conceals another significant, and often unappreciated, feature of active systems: their capability to adapt to the environments where they reside. For example, human cancer cells switch between distinct invasion modes when they encounter constrictions in the crowded environment of stroma3, and the growth of bacterial biofilms can be directed by their surroundings4. Moreover, geometrical confinement tends to control active flows, replacing the bulk chaotic flow state often termed active turbulence, by more regular flow configurations. Understanding the subtleties of how this occurs will have relevance to possible future applications of active materials in microfluidics and self-assembly, and in assessing the relevance of the concepts of active matter in the description of biological systems.
Recent contributions dealing with confinement in bacterial suspensions and cell layers have demonstrated a rich range of behaviour. Competition between wall orientation, hydrodynamic interactions, topology and activity leads to a wide variety of flow patterns: spiral vortices5,6, synchronised vortex lattices7, unidirectional flows8,9,10, shear flows11 and freezing12,13. Work on confining active mixtures of microtubules and motor proteins to circular domains14,15,16,17, in vesicles18 or droplets19,20 and to more complex geometries such as tori21,22, has already probed the specific effects of interfacial viscosity, curvature and 3D confinement. These experimental results have prompted parallel simulations23,24,25,26,27,28.
Here we concentrate on active nematics confined to two-dimensional channels. The system is composed of self-propelling elongated units formed by bundled microtubules powered by adenosine triphosphate (ATP)-consuming kinesin29. Early theoretical work predicted that laterally-confined active nematics undergo an instability to spontaneous laminar flow when the channel width reaches a typical length scale that depends on the strength of the activity30. This prediction has been recently confirmed in experiments with spindle-shaped cells11. On the other hand, simulations have predicted that, at higher activities or in wider channels, a structured 'dancing' state can be stable in active nematics31. Our aim here is to assess, in a well-controlled and tunable experimental system, and with the support of numerical simulations, the role of confinement in the patterns and dynamics of an active nematic. In particular, we explore the emergence of a new length scale different from the active length that characterizes the unconfined systems. We find a rich dynamical behaviour, depending on the channel width. More specifically, we uncover a defect-free regime of shear flow in narrow channels. This regime is unstable with respect to the nucleation of short-lived defects at the walls. By increasing the channel width, defect lifetime increases, developing a spatio-temporal organization that corresponds to the predicted state of dancing vortical flows31, before full disorganization into the active turbulence regime for still wider channels, as is typical of the unconfined active nematic. We stress the close interplay between the velocity field and the defect dynamics, and highlight the emergence of a new length scale that, contrary to the classical active length scale, does not depend on the activity level but merely on geometrical parameters.
The active system we use comprises microtubules powered by ATP-consuming, two headed kinesin molecular motors29. Addition of the depleting agent polyethylene glycol (PEG) concentrates the microtubules into bundles, hundreds of microns long. Within each bundle the kinesin motors bridge neighbouring microtubules and walk towards their positive ends leading to internal sliding of filaments of opposite polarity. The active nematic was prepared using an open-cell design32, in which 2 μL of the active aqueous microtubule-mixture was placed inside a custom-made pool of 5 mm diameter and was covered with 60 μL of 100 cSt silicon oil (see Methods). Within 30 min, an active nematic layer extends over the whole surface of the pool. Driven by the motors, the microtubule bundles at the interface continuously extend and buckle. This gives rise to a dynamical steady state, termed active turbulence, characterised by high vorticity and by the creation, translation and destruction of topological defects.
The layer of active nematic is confined in rectangular enclosures by means of micro-printed polymer grids of 100 μm thickness that are placed in contact with the oil/aqueous interface (see Fig. 1a). The active nematic is in contact with the active bulk solution underneath, thus ensuring activity and material parameters are equal in all channels. A detailed sketch of the experimental protocol is available in Supplementary Fig. 1.
Flow states. a Top view of the experimental setup including the relevant spatial dimensions. A polymer plate with rectangular openings is placed, by means of a micropositioner, at the interface between the active fluid and silicon oil, thus constraining the existing active nematic. b–d Confocal fluorescence micrographs of an active nematic interface confined in channels of different widths. Scale bar: 100 μm. e–g Corresponding simulations of the experimental system. Streaking patterns follow the director field tangents. They are produced using a Line Integral Convolution of the director field56 with Paraview software. The colour map corresponds to the computed nematic order parameter, q. b, e Correspond to strong confinement, where the filaments are organized into an unstable shear alignment regime. c, f Illustrate the effects of moderate confinement, forcing a new dynamical regime of the defects. d, g Correspond to the active turbulence regime
To model the dynamics of the confined microtubule-motor system we use a continuum description of a two-dimensional, active gel1,2,33,34. The fields that describe the system are the total density ρ, the velocity u, and the nematic tensor Q = 2q(nn − I/2), that describes both the orientation (n) and the magnitude (q) of alignment of the nematogens.
The nematic tensor is evolved according to the Beris–Edwards equation35
$$\left( {\partial _{\mathrm{t}} + {\boldsymbol{u}} \cdot {\bf{\nabla }}} \right){\mathbf{Q}} - {\mathbf{S}} = {\mathrm{\Gamma }}_{\mathrm{{Q}}}{\mathbf{H}},$$
where S = ξE − (Ω ⋅ Q − Q ⋅ Ω) is a generalised advection term, characterising the response of the nematic tensor to velocity gradients. Here, E = (∇u + ∇uT)/2 is the strain rate tensor, Ω = (∇uT − ∇u)/2 the vorticity tensor, and ξ is the alignment parameter representing the collective response of the microtubules to velocity gradients. ΓQ is a rotational diffusivity and the molecular field \({\mathbf{H}} = -\frac{\delta {\cal{F}}}{\delta {\mathbf{Q}}}+\frac{\mathbf{I}}{2}{\rm{Tr}} \left(\frac{\delta {\cal{F}}}{\delta {\mathbf{Q}}}\right)\), models the relaxation of the orientational order to minimise a free energy \({\cal{F}}\).
The free energy includes two terms. The first is an elastic free energy density, \(\frac{1}{2}K({\bf{\nabla }}{\mathbf{Q}})^2\), which penalises any deformations in the orientation field of the nematogens and where we assume a single elastic constant K. We note that the free energy functional does not include any Landau-de Gennes bulk free energy terms: all the ordering in the simulations arises from the activity36. This is motivated by the fact that there is no equilibrium nematic order in the experimental system without ATP (i.e., in the absence of active driving). The second contribution to the free energy is a surface anchoring, \(\frac{1}{2}W{\mathrm{Tr}}({\mathbf{Q}} - {\mathbf{Q}}_{\mathrm{D}})^2\). To correspond to the experiments QD is chosen so that the director prefers to align parallel to the boundary walls. The strength of anchoring at the boundaries, W, is set to values corresponding to weak anchoring so that the nematogens can re-orientate at the walls to allow defects to form there.
The total density ρ satisfies the continuity equation and the velocity u evolves according to
$$\rho (\partial _{\mathrm{{t}}} + {\boldsymbol{u}} \cdot {\bf{\nabla }}){\boldsymbol{u}} = {\bf{\nabla }} \cdot {\bf{{\Pi}}},$$
where Π is the stress tensor. The stress contributions comprise the active stress Πactive = −ζQ where ζ is the activity coefficient, viscous stress Πviscous = 2ηE, where η is the viscosity, and the elastic stresses \({\bf{\Pi}}^{\mathrm{elastic}} = - P{\mathbf{I}} - 2\xi q{\mathbf{H}} + {\mathbf{Q}}\cdot {\mathbf{H}} - {\mathbf{H}}\cdot {\mathbf{Q}} - {\bf{\nabla }}{\mathbf{Q}}\frac{{\delta {\cal{F}}}}{{\delta {\bf{\nabla }}{\mathbf{Q}}}}\), where \(P = p - \frac{K}{2}({\bf{\nabla }}{\mathbf{Q}})^2\) is the modified pressure. Equations (1) and (2) were solved numerically using a hybrid lattice–Boltzmann method37,38. In the experiments the microtubules slide over the walls and therefore free-slip boundary conditions were imposed on the velocity field. See Methods for simulation parameters.
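The full hybrid lattice-Boltzmann solver is well beyond a short example, but the purely relaxational part of Eq. (1) is simple to sketch. With only the single-constant elastic free energy (no advection, no flow coupling, no wall anchoring), the molecular field reduces to H = K∇²Q and the update is ∂tQ = ΓQH. The following Python sketch, with placeholder parameter values rather than those of Table 2, illustrates this on a periodic grid; without activity the initial noise simply relaxes away, so it shows the structure of the scheme rather than the active dynamics:

```python
import numpy as np

K, Gamma_Q, dt = 0.05, 0.34, 1.0         # placeholder values in lattice units
nx, ny = 256, 64                         # channel-like grid

def laplacian(f):
    """Five-point finite-difference Laplacian with periodic boundaries."""
    return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
            np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

# In 2D the traceless, symmetric Q-tensor has two independent components,
# Qxx and Qxy (with Qyy = -Qxx). Start from small random noise.
rng = np.random.default_rng(0)
Qxx = 1e-3 * rng.standard_normal((nx, ny))
Qxy = 1e-3 * rng.standard_normal((nx, ny))

for step in range(1000):
    Hxx, Hxy = K * laplacian(Qxx), K * laplacian(Qxy)   # molecular field H = K lap(Q)
    Qxx += dt * Gamma_Q * Hxx                           # dQ/dt = Gamma_Q * H
    Qxy += dt * Gamma_Q * Hxy

q = np.sqrt(Qxx**2 + Qxy**2)   # scalar order parameter, since Q = 2q(nn - I/2)
```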
Flow states
The main experimental control parameters of our system are the channel width, w, and the concentration of ATP, which determines the activity. In this section, we describe how both the defect landscape and flow patterns evolve as w is increased, based on experiments (Fig. 1b–d) and simulations (Fig. 1e–g). We identify two well-defined regimes: a shear flow regime, observed for w < 80 μm, which is transiently defect-free (Fig. 1b, e, and Supplementary Movies 1–3), and the dancing defects regime, for w > 90 μm (Fig. 1c, f, and Supplementary Movies 4 and 5). The transition between these two regimes is not sharp. For values of w in the range 80−90 μm, the direction of the shear is not uniform along the channel, but rather composed of patchy domains where shear flow spontaneously arises with a random direction, and is then quickly disrupted by instabilities (see Supplementary Movies 6 and 7). Finally, behaviour typical of the unconfined active nematic is recovered for w > 120 μm (Fig. 1d, g).
We find that the relevant parameter setting the dynamic state in the system is λ/w, where λ is the mean defect separation (Fig. 2). The latter is defined as λ = (Lw/N)1/2, where L is the length of a given channel, w its width, and N the number of defects averaged in time. For unconfined active nematics, this length scale coincides with \(l_{\mathrm{a}} = \sqrt {K/\zeta }\), often referred to as the active length-scale, which determines the vortex size distribution and corresponds to the mean defect separation14,38,39. However, we find that, in confined active nematics, λ is no longer equal to la. Instead, it significantly decreases with the channel width (see Supplementary Fig. 2).
Defect Spacing. Experimental mean defect spacing, λ, rescaled by the channel width, w, as a function of w. Different colours correspond to different activities. The error bars correspond to the standard deviation for measurements at 10 random times for each experiment. The non-scaled data is displayed in Supplementary Fig. 2. The dotted line corresponds to a fit with w−1/2. The continuous lines correspond to λ/w = la/w for each experiment, with la being the active length scale corresponding to the value of λ in the unconfined case
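The two length scales compared in Fig. 2 are simple to evaluate; the following illustrative helpers (units as in the experiments, e.g. μm) are not the authors' analysis code:

```python
import numpy as np

def mean_defect_separation(L, w, defect_counts):
    """lambda = sqrt(L*w/N), with N the time-averaged defect number in a
    channel of length L and width w."""
    return np.sqrt(L * w / np.mean(defect_counts))

def active_length(K, zeta):
    """Active length scale l_a = sqrt(K/zeta)."""
    return np.sqrt(K / zeta)
```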
In the following, the flow dynamics and the defect landscape are analysed, combining experiments and simulations, for the simple shear (Fig. 3a), flow switching (Fig. 3b), and defect dancing (Fig. 3c) regimes.
Flow patterns and periodic instabilities in the shear state. a–d Each panel is composed of (top) a snapshot of a typical confocal fluorescence image of the active nematic, with defect locations overlaid, (center) experimental Particle Image Velocimetry measurement of the flow patterns, coloured by the normalized vorticity, with velocity streamlines overlaid and (bottom) numerical simulations coloured by the normalized vorticity. Lines in the simulations correspond to the director field. a Shear flow state, λ/w ~ 1, in the shear phase. b Shear flow state, λ/w ~ 1, in the instability phase. c Dancing state, λ/w ~ 0.7. Scale bars: 50 μm. d, e Experimental kymographs for Vx averaged along the channel and Vy averaged across the channel are shown in d while the corresponding simulations appear in e. The colour encodes the normalized velocity components. One instance of stable shear flow regime and one instance of transversal instability are sketched
Defect-free shear flow disrupted by instabilities
The shear flow regime is a defect-free state that appears for λ/w > 1 (Fig. 2), experimentally realized here for w < 80 μm. The active material is primarily aligned parallel to the walls over distances that can persist along the whole channel as shown in Fig. 3a. A global shear deformation is observed, with flows along the channel (Supplementary Fig. 3). The maximum velocities are measured at the walls, with opposite signs, and the velocity perpendicular to the walls is negligible. The shear rate, characterised by the slope of the velocity profile, is approximately constant over a relatively large range of channel widths. This is as expected because the activity, and hence the energy input per unit area, which must be balanced by the viscous dissipation due to the shear, is the same for all channels.
This aligned, defect-free configuration shares visual similarities with the banding patterns that emerge in simulations of dry systems of self-propelled particles with nematic alignment40,41. A fundamental difference with our system, which is continuous rather than composed of discrete particles, is the fact that, in our case, nematic order is intrinsic, and flow alignment is the result of hydrodynamic and elastic interactions with the boundaries. In contrast, in the mentioned dry systems, nematic order is restricted to bands that arise from giant density fluctuations allowed by the compressible nature of the particle ensembles.
Extensile active nematics, such as the one that we use here, are intrinsically unstable to bend deformations30,42,43,44 in aligned conformations. As a consequence, the sheared state eventually develops local bend instabilities, as shown in Fig. 3b. The velocity field therefore repeatedly switches between two different states: longitudinal shear flow and a transversal instability regime (see Supplementary Movies 1 and 2). The dynamics of the switching behaviour and the coexistence of the shear and the instability states are best illustrated in a space-time diagram of the averaged velocity components in the channel, as shown in Fig. 3d, e. In the shear state, the velocity perpendicular to the boundaries and averaged across the channel width, 〈Vy(x)〉y, vanishes, while the velocity component parallel to the channel walls and averaged along the channel length, 〈Vx(y)〉x, is maximum at the walls. Once bend instabilities are triggered, defect pairs form and +1/2 defects propagate across the channel and dismantle the shear state, which leads to the emergence of non-zero values of 〈Vy(x)〉y with a well-defined length scale along the channel. The defects eventually reach the opposing wall and annihilate, such that the shear state is reestablished. As is apparent from Fig. 3d, e, over time the active system alternates between the two regimes. Simulations allowed us to test channel widths well below the experimental capabilities, making it possible to explore the λ ≫ w regime. For these conditions, we observed stable shear flow, as any pairs of defects generated at the channel walls immediately self-annihilated (see Supplementary Movie 8).
The instability takes the form of a sinusoidal deformation of the aligned nematic field, with a well-defined length-scale along the channel. As the perturbation progresses, defects are rapidly nucleated from the walls, at regularly-spaced positions coinciding with the maxima of the sinusoidal perturbation. We measured the wavelength of the instability and found that it scales with the channel width (Fig. 4a–c). This is strong evidence that the hydrodynamics is screened, and that the channel width is important in controlling the flows.
Instability wavelength. a Measurement of the instability wavelength λi vs w for different values of activity, controlled by the adenosine triphosphate concentration, whose values are displayed in the legend. b Fluorescent micrographs displaying the measurement of the instability wavelength λi at the onset of defect nucleation in the shear state for three different channels widths w. λi is taken as the distance between two neighbouring negative defects along a given wall, as indicated. Scale bar: 50 μm. c λi vs w in simulations, for different values of activity as indicated. The unit of the axes is the number of lattice sites (l.s.). In a error bars correspond to the standard deviation of the measurements for about 10 independent instability onsets per channel. In c error bars correspond to the standard deviation of the measurements
Upon their nucleation at the boundaries, the orientation of the +1/2 defects is strongly anisotropic (see Fig. 5a). They preferentially align perpendicular to the walls and, due to their active self-propulsion25,29,45, they move away from the walls into the bulk. On the contrary, because of their three-fold symmetric configuration, −1/2 defects have no self-propulsion and remain in the vicinity of the walls31. Eventually, the +1/2 defects reach the opposite wall and annihilate with negative defects residing close to it. In this way, the defect-free phase is periodically restored (see Supplementary Movie 3). Remarkably, even though no chirality is observed in the sheared state, as we repeat the experiments, the handedness of the shear flow initially selected is preserved through successive instability cycles. This memory can be explained by observing that the instability is triggered locally and that it is entrained by the neighbouring sheared regions.
Defect nucleation. a Statistical distribution of positive defect orientations. The angle ψ corresponds to the orientation of the defect with respect to the wall. ψ = π/2 (resp. ψ = 3π/2) corresponds to a defect perpendicularly colliding with (resp. moving away from) the wall. The red distribution is the result for defects close to the wall, while the yellow distribution refers to bulk defects. b Experimental (left) and simulated (right) probability distributions of defect position across a channel. Green (blue) distribution refers to positive (negative) defects
In a recent paper, Opathalage et al.17 reported similar defect nucleation at the boundaries for a microtubule/kinesin mixture in circular confinement with no-slip boundary conditions and planar anchoring. They attribute the rate of defect formation to a combination of the build-up of microtubule density at the boundary, which increases local active stresses, and a change in the azimuthal force as circular flows wind the microtubules around the confining disk. In our experiments and simulations the variation of microtubule density across the channel is small (except in defect cores) and the instability period is controlled by the time taken to realign the microtubules parallel to the walls after each instability. This realignment period decreases significantly not only for increasing channel width, but also for increasing activity, as shown in Supplementary Fig. 4.
The dancing state: a one-dimensional line of flow vortices
Increasing the channel width to values between 90 and 120 μm, one-dimensional arrays of vortices are observed as shown in Fig. 3c. A close look at the defect distribution reveals that the transition to the flow state with organised vorticity arrays corresponds to the point when the channel can accommodate more than one defect in its cross-section, i.e., λ/w < 1 (Fig. 2). One could expect that this criterion is reached when the channel width becomes comparable to the active length-scale. However as shown in Fig. 2, la/w is still much larger than 1 in the range of the dancing state, i.e., λ ≪ la. Furthermore, contrary to the la/w curves, λ/w does not seem to depend on activity, supporting the idea that λ is indeed a pure geometrical feature. Dynamically, the system behaves as if two distinct populations of positive defects are travelling along the channel in opposite directions, passing around each other in a sinusoidal-like motion. A similar state had been predicted by simulations and was referred to as the dancing state31. However, contrary to the published simulations, the dancing state observed in our experiments is quite fragile, and vortex lattices are always transient and localised in space. Defects may annihilate with their negative counterparts, or even switch their direction of motion, thus perturbing the flow pattern (see Supplementary Movies 4 and 5). The difference between the spatial organisation of oppositely charged defects in the confined active nematic is manifest in their arrangement across the width of the channel (Fig. 5b and Supplementary Fig. 5a–c). The distribution of the +1/2 defects has a single peak at the centre of the channel (Fig. 5b, green). On the other hand, the −1/2 defect distribution has two peaks, one at each of the boundary walls (Fig. 5b, blue) and the profiles do not rescale with the channel width. Instead, the wall-peak distance is approximately constant at a separation ~18 μm from the wall, as shown in Supplementary Fig. 5d. This can be attributed to −1/2 defects having no self-propulsion and thus interacting elastically with the channel walls46. The distance of the −1/2 defect from the walls is therefore expected to be controlled by the intrinsic anchoring penetration length of the nematic ln = K/W, which is set by the competition between the orientational elastic constant K and the strength of the anchoring at the wall W, and is independent of the channel width and activity of the particles. As expected, for wider channels, w > 120 μm, the difference between the +1/2 and −1/2 defect distributions diminishes as active turbulence is established (see Supplementary Fig. 5c). We observed a similar behaviour in the simulations, but the −1/2 defects were more strongly localised near the walls, and the +1/2 defects consequently tended to lie towards the centre of the channel (Fig. 5b). Simulations have also allowed us to pinpoint the necessary boundary conditions at the channel walls to trigger defect nucleation. We find that such a localised defect formation at the walls is obtained only for free-slip boundary conditions for the velocity, and weak planar anchoring boundary conditions for the nematic director field. This is because the free-slip velocity, together with the parallel anchoring of the director, allows for strong tangential active flows, and hence strong tangential nematic order, to develop along the boundaries. 
This results in bend instabilities that grow perpendicular to the walls with the weak strength of the anchoring allowing the director to deviate from a planar configuration at the positions where the bend instability is developed. Our previous simulations that assumed no-slip velocity and strong alignment conditions on the confinement did not observe defect nucleation at the boundaries31,47, and showed insensitivity of the active nematic patterns to the boundary conditions47. This is because the strong anchoring used in these works prevented defects forming at the walls.
It is, however, interesting to note that a recent computational study, based on a kinetic approach, has reported a special case of defect nucleation at the boundaries of an active nematic confined within a circular geometry with no-slip velocity boundary condition and free anchoring48. However, in that work the wall-bound defect nucleation was observed only for confining disks of small sizes and had a very different dynamics than that reported here: the +1 defect imposed by the circular geometry was first dissociated into two +1/2 defects and then, for sufficiently small confinement one of the +1/2 defects kept moving into and out of the boundary. This is in contrast to our results that show regularly-spaced defect pair nucleation sites at the boundaries for a range of channel widths. Moreover, we find that the corresponding defect spacing is governed by an instability wavelength that is no longer given by the conventional active length scale.
We have presented experimental results, supported by continuum simulations, investigating the flow and defect configurations of an active nematic confined to rectangular channels of varying width. Our experiments have identified a new dynamical state, where well-defined shear flow alternates in a regular way with bursts of instability characterised by +1/2 topological defects moving across the channel. We have also shown that, for wider channels, it is possible to identify the dancing state31, although the particular boundary conditions considered in the present work make it less stable.
Our work highlights the importance of topological defects in controlling the confined flows. Because the microtubules have weak planar anchoring and can freely slide along the channel walls, pairs of ±1/2 defects form at the walls of the channel. The +1/2 defects are self-propelled and move away from the walls whereas the −1/2 defects remain close to the boundaries. The distance to the boundaries is set by the anchoring penetration length. In bulk active nematics the defect spacing is set by the active length scale and, although there is some evidence of long-range ordering49,50,51, defect motion is primarily chaotic. In confinement, however, the defect spacing and the wavelength of the instability are set by the channel width and the defect trajectories are more structured.
Together, experiments and simulations demonstrate a surprisingly rich topological defect dynamics in active nematics under channel confinement, and a sensitive dependence on both channel width and boundary conditions. Therefore, confinement provides a way of controlling active turbulence and defect trajectories, a pre-requisite for using active systems in microfluidic devices.
Complementary information about the preparation of active samples and grid manufacturing is given in Supplementary information, together with additional comments on image acquisition and data analysis. Details pertaining to the simulations including a table with all parameter values and a discussion on boundary conditions are also provided.
Protein preparation
Microtubules (MTs) were polymerized from heterodimeric (α, β)-tubulin from bovine brain [a gift from Z. Dogic's group at Brandeis University (Waltham, MA)], incubated at 37 °C for 30 min in aqueous M2B buffer (80 mM Pipes, 1 mM EGTA, 2 mM MgCl2) prepared with Milli-Q water. The mixture was supplemented with the reducing agent dithiothreitol (DTT) (Sigma; 43815) and with guanosine-5′-[(α, β)-methyleno]triphosphate (GMPCPP) (Jena Biosciences; NU-405), a slowly hydrolysable analogue of the biological nucleotide guanosine-5′-triphosphate (GTP) that completely suppresses the dynamic instability of the polymerized tubulin. GMPCPP enhances spontaneous nucleation of MTs, obtaining high-density suspensions of short MTs (1–2 μm). For fluorescence microscopy, 3% of the tubulin was labelled with Alexa 647. Drosophila melanogaster heavy-chain kinesin-1 K401-BCCP-6His (truncated at residue 401, fused to biotin carboxyl carrier protein (BCCP), and labelled with six histidine tags) was expressed in Escherichia coli using the plasmid WC2 from the Gelles Laboratory (Brandeis University) and purified with a nickel column. After dialysis against 500 mM imidazole aqueous buffer, kinesin concentration was estimated by means of absorption spectroscopy. The protein was stored in a 60% (wt.vol−1) aqueous sucrose solution at −80 °C for future use.
Images were acquired using a laser scanning confocal microscope Leica TCS SP2 AOBS with a 10× objective at typical frame rates of 1 image per second. For each experiment and at each time, a frame is acquired in both fluorescence and reflection mode. Fluorescence is used to visualize the nematic field, and reflection images are used for Particle Image Velocimetry (PIV) measurements as explained in the data analysis section.
Grid manufacturing
The grids are printed using a two-photon polymerization printer, a Nanoscribe GT Photonic Professional device, with a negative-tone photoresist IP-S (Nanoscribe GmbH, Germany) and a ×25 objective. The grids were directly printed on silicon substrates without any preparation to avoid adhesion of the resist to the substrate (plasma cleaning of the substrate, for example, would increase the adhesion). After developing 30 min in propylene glycol monomethyl ether acetate (PGMEA 99.5%, Sigma Aldrich) and 5 min in isopropanol (Technical, VWR), a batch polymerization is performed with UV-exposure (5 min at 80% of light power). After printing onto a silicon wafer, the grids are bound to a vertical glass capillary with a UV-curable glue. The capillary is then delicately manipulated to detach the grids from the printing support. The grids are washed in three steps (isopropanol, DI water, ethanol) and dried with a nitrogen stream before each experiment. The thickness of the grids is 100 μm, to ensure good mechanical resistance. We have used grids with rectangular openings 1.5 mm long and widths ranging from 30 to 300 μm. Each grid contains different channel widths so that simultaneous experiments can be performed with the same active nematic preparation, thus ensuring that material parameters remain unchanged when comparing different confinement conditions. The micropatterning has a resolution of around 50 nm, much higher than the needs of the experiment. The boundary conditions also appear to be very well defined. We never observe microtubules sticking to the wall. As shown by the flow profiles measured in the shear state, the shear profile extends up to the wall, attesting to a free-slip boundary condition. The thickness of the active nematic layer is of the order of a few microns, much thinner than the grids. When the grid comes to the interface, we observe that the bound interface tends to lie somewhere in between the top and bottom of the grid, as shown in Fig. 1c. When the grid impacts the interface, uncontrolled flows may destabilize the active nematic layer. These flows mostly depend on the parallelism upon impact, but also on the wetting properties of the grid. The parallelism is relatively easy to control with micro-screw adjustment. On the other hand, the wetting properties of the grid are still under scrutiny. In our experience, the surface of the grids deteriorates after the first use, therefore the samples are considered single-use.
Active gel preparation
Biotinylated kinesin motor protein and tetrameric streptavidin (Invitrogen; 43-4301) aqueous suspensions were incubated on ice for 30 min at the specific stoichiometric ratio 2:1 to obtain kinesin–streptavidin motor clusters. MTs were mixed with the motor clusters that acted as cross-linkers, and with ATP (Sigma; A2383) that drove the activity of the gel. The aqueous dispersion contained a nonadsorbing polymeric agent (PEG, 20 kDa; Sigma; 95172) that promoted the formation of filament bundles through depletion. To maintain a constant concentration of ATP during the experiments, an enzymatic ATP-regenerator system was used, consisting of phosphoenolpyruvate (PEP) (Sigma; P7127) that fueled pyruvate kinase/lactate dehydrogenase (PK/LDH) (Invitrogen; 434301) to convert ADP back into ATP. Several antioxidant components were also included in the solution to avoid protein denaturation, and to minimize photobleaching during characterization by means of fluorescence microscopy. Antioxidant solution 1 (AO1) contained 15 mg.mL−1 glucose and 2.5 M DTT. Antioxidant solution 2 contained 10 mg.mL−1 glucose oxidase (Sigma G2133) and 1.75 mg.mL−1 catalase (Sigma, C40). Trolox (Sigma, 238813) was used as an additional antioxidant. A high-salt M2B solution was used to raise the MgCl2 concentration. The PEG-based triblock copolymer surfactant Pluronic F-127 (Sigma; P-2443) was added to procure a biocompatible water/oil interface in subsequent steps. Buffer for stock solutions of PEP, DTT, ATP, PEG and Streptavidin was M2B, and we added 20 mM of K2HPO4 to the buffer of catalase, glucose, glucose oxidase and trolox. A typical recipe is summarized in Table 1.
Table 1 Composition of all stock solutions, and their volume fraction in the final mixture
Statistical information on defect unbinding and defect orientation (Fig. 5) was obtained using the defect detection and tracking Matlab codes developed by Ellis et al.21,22, coupled with a custom-made direction detection Matlab code derived from Vromans and Giomi's method50. PIV measurements in Fig. 3a–c were obtained using confocal images in reflection mode. In this mode, the active nematic layer exhibits textures that can quite efficiently act as tracers for PIV software. The images were treated with the ImageJ PIV plugin52. The data was then processed with custom Matlab codes. Flow profiles were computed using a custom ImageJ plugin. PIV measurements failed to give reliable measurements close to the walls for narrow channels because of a substantial drop of resolution that we attribute to artefacts in the reflection mode. Because of this, for a system in the shear flow state, the longitudinal velocity Vx(y) was determined as follows. For each y position, a kymograph is built by assembling the image profile along the channel (x coordinate) at different times. The inhomogeneous reflected-light intensity results in traces in the x − t plane of the kymograph, whose slope gives Vx(y). The average value of this slope is obtained by computing the FFT of the kymograph image and finding at which angle the FFT has the highest intensity. We repeat the same process for each y coordinate, resulting in the desired flow profile. The kymograph in Fig. 3d was obtained with a custom Matlab code using PIV data from ImageJ as an input. Defect spacing in Fig. 2 was obtained by manually counting the number of defects in a given channel, for at least 10 random frames. Velocities were computed using the ImageJ plugin Manual Tracking, averaging the velocities of 10 defects per experiment over their lifetime, typically 15 s.
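The authors' kymograph analysis was done with ImageJ and Matlab; as a standalone illustration of the same idea, the following NumPy sketch estimates the dominant trace slope of a kymograph from the orientation of the brightest line through the origin of its 2D Fourier spectrum (function and variable names are ours):

```python
import numpy as np

def velocity_from_kymograph(kymo, dx, dt, n_bins=720):
    """Dominant trace velocity in a kymograph (rows = time, columns = position).

    dx : pixel size along the channel, dt : frame interval. A pattern obeying
    x - v*t = const concentrates spectral power on the line omega = -v*k, so the
    angle of maximum power gives the velocity as v = -tan(theta)."""
    power = np.abs(np.fft.fftshift(np.fft.fft2(kymo - kymo.mean())))**2
    omega = np.fft.fftshift(np.fft.fftfreq(kymo.shape[0], d=dt))   # temporal frequencies
    k = np.fft.fftshift(np.fft.fftfreq(kymo.shape[1], d=dx))       # spatial frequencies
    K, W = np.meshgrid(k, omega)
    mask = K > 0                              # one half-plane, excluding the k = 0 column
    theta = np.arctan2(W[mask], K[mask])
    hist, edges = np.histogram(theta, bins=n_bins, weights=power[mask])
    theta_star = 0.5 * (edges[:-1] + edges[1:])[np.argmax(hist)]
    return -np.tan(theta_star)
```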
Equations (1) and (2) of the manuscript are solved using a hybrid lattice–Boltzmann (LB) method. A finite-difference scheme on a five-point stencil is used to solve Eq. (1), discretizing the derivatives on a square grid. This is coupled to the Navier–Stokes equation (2), which is solved using the lattice–Boltzmann method with the Bhatnagar–Gross–Krook (BGK) approximation and a single relaxation time for the collision operator53. The time integration is performed using a PECE predictor–corrector method. The discrete space and time steps are chosen as unity for the LB method. The algorithm is implemented in the C++ programming language.
All simulations were performed on a 1800 × w rectangular grid, where w varied from 30 to 160 lattice sites. The boundary conditions for the velocity field along all walls and corners are free-slip. The boundary conditions for the director field are weak planar along the channel walls, which is implemented via the free energy term \(\frac{1}{2}W{\mathrm{Tr}}({\mathbf{Q}} - {\mathbf{Q}}_{\mathrm{D}})^2\). The corners have QD such that the director field aligns at an angle of 45 degrees to the length of the channel, on the front-right and back-left corners, and at an angle of 135 degrees on the front-left and back-right corners. The left and right walls have no free energy term applied, and have Neumann boundary conditions.
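The weak planar anchoring term can be made concrete with a small sketch. Assuming a two-dimensional, traceless and symmetric Q-tensor built from a director angle, the Python snippet below evaluates the surface free-energy density (W/2) Tr[(Q − Q_D)²] for a preferred wall alignment; the anchoring strength `W` and order parameter `S` are placeholder values, not the simulation parameters listed in Table 2.

```python
import numpy as np

def q_tensor(theta, S=1.0):
    """2D traceless, symmetric Q-tensor for a director at angle `theta`."""
    n = np.array([np.cos(theta), np.sin(theta)])
    return S * (np.outer(n, n) - 0.5 * np.eye(2))

def anchoring_energy(Q, theta_wall, W=1.0, S=1.0):
    """Surface free-energy density (W/2) Tr[(Q - Q_D)^2] for a preferred
    wall alignment at angle `theta_wall` (placeholder W and S values)."""
    D = Q - q_tensor(theta_wall, S)
    return 0.5 * W * np.trace(D @ D)

# Example: a director 10 degrees away from the 45-degree corner anchoring.
E = anchoring_energy(q_tensor(np.deg2rad(55.0)), np.deg2rad(45.0))
```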
The parameters used in the simulations are given in Table 2. Initially, we chose parameters in a range that has previously been successful in reproducing the dynamics of experiments in microtubule bundles54,55, and then further refined the parameters via a search in phase space. All simulations were run for up to 300,000 simulation time steps. Some simulations were performed up to 1,000,000 time steps to further test the stability of the simulations.
Table 2 Parameters used in the simulations
The initial velocity field was set to zero everywhere in the domain. The orientation of the director field was initialised at an angle of either 0, 35 or 90 degrees to the length of the channel, to within some noise. The noise was implemented using the uniform_real_distribution() function in the standard C++ library. In all simulations, the parameters were expressed in lattice units.
The figures of the simulations in Figs. 1 and 3a–c were created using the programme Paraview. In Fig. 1, the director field was overlaid on the order parameter field q. In Fig. 3a–c, the director field was overlaid on the vorticity field. The kymograph of the velocity components Vx and Vy in Fig. 3d, e was created using Matlab, by calculating the mean value of the velocity components in a subsection of the channel at any given time. The distributions of the defect positions across the channel width in simulations shown in Fig. 5 and Supplementary Fig. 5 were created using Matlab. Each bar represents the normalised number of defects along a strip of lattice sites parallel to the channel length, of width one lattice site. The supplementary movies of the simulations were created using Paraview, and any defect tracking was created by overlaying the Paraview images with markers from Matlab.
In the simulations, λi was calculated as follows. At any given channel width and time-step, the distance between each +1/2 defect and its nearest neighbour was calculated. The mean of this quantity was then taken, to give the mean defect–defect distance at a fixed time. Then an average over time was taken to give λi for a single simulation. This process was repeated for three different simulations with different initial conditions. The initial conditions were distinguished by the initialisation angle of the director field, which was selected to be 0, 45 and 90 degrees to the length of the channel. The mean value of these λi's was then taken to give the value plotted in Fig. 4. The standard deviation of these values was used to obtain the error bars.
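A compact illustration of this averaging procedure is given below. It assumes `frames` is a list of (N, 2) arrays of +1/2 defect positions, one per saved time step, and `runs` is a list of such lists, one per initial condition; it is a Python sketch of the procedure described above, not the analysis code used to produce Fig. 4.

```python
import numpy as np

def mean_defect_spacing(frames):
    """Time-averaged mean distance from each +1/2 defect to its nearest neighbour.

    `frames` is a list of (N, 2) arrays of defect positions, one per time step.
    """
    per_frame = []
    for pts in frames:
        if len(pts) < 2:
            continue
        d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
        np.fill_diagonal(d, np.inf)              # exclude self-distances
        per_frame.append(d.min(axis=1).mean())   # nearest-neighbour mean for this frame
    return np.mean(per_frame)

# lambda_i is then averaged over runs with different initial director angles,
# and the standard deviation across runs gives the error bars:
# lam = np.mean([mean_defect_spacing(run) for run in runs])
# err = np.std([mean_defect_spacing(run) for run in runs])
```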
The data that support the findings of this study are available from the corresponding author upon reasonable request.
Source data and codes for active nematic simulations are available from the corresponding author upon reasonable request.
Ramaswamy, S. The mechanics and statistics of active matter. Annl. Rev. Cond. Mat. Phys. 1, 323–345 (2010).
Marchetti, M. C. et al. Hydrodynamics of soft active matter. Rev. Mod. Phys. 85, 1143–1189 (2013).
Friedl, P., Locker, J., Sahai, E. & Segall, J. E. Classifying collective cancer cell invasion. Nat. Cell Biol. 14, 777–783 (2012).
Conrad, J. C. & Poling-Skutvik, R. Confined flow: Consequences and implications for bacteria and biofilms. Annl. Rev. Chem. Biomol. Eng. 9, 175–200 (2018).
Wioland, H., Woodhouse, F. G., Dunkel, J., Kessler, J. O. & Goldstein, R. E. Confinement stabilizes a bacterial suspension into a spiral vortex. Phys. Rev. Lett. 110, 268102 (2013).
Lushi, E., Wioland, H. & Goldstein, R. E. Fluid flows created by swimming bacteria drive self-organization in confined suspensions. PNAS 111, 9733 (2014).
Wioland, H., Woodhouse, F. G., Dunkel, J. & Goldstein, R. E. Ferromagnetic and antiferromagnetic order in bacterial vortex lattices. Nat. Phys. 12, 341–345 (2016).
Wioland, H., Lushi, E. & Goldstein, R. E. Directed collective motion of bacteria under channel confinement. New J. Phys. 18, 075002 (2016).
Deforet, M., Hakim, V., Yevick, H., Duclos, G. & Silberzan, P. Emergence of collective modes and tri-dimensional structures from epithelial confinement. Nat. Commun. 5, 3747 (2014).
Xi, W., Sonam, S., Beng Saw, T., Ladoux, B. & Teck Lim, C. Emergent patterns of collective cell migration under tubular confinement. Nat. Commun. 8, 1517 (2017).
Duclos, G. et al. Spontaneous shear flow in confined cellular nematics. Nat. Phys. 14, 728–732 (2018).
Duclos, G., Garcia, S., Yevick, H. G. & Silberzan, P. Perfect nematic order in confined monolayers of spindle-shaped cells. Soft Matter 10, 2346–2353 (2014).
Duclos, G., Erlenkämper, C., Joanny, J. F. & Silberzan, P. Topological defects in confined populations of spindle-shaped cells. Nat. Phys. 13, 58–62 (2017).
Guillamat, P., Ignés-Mullol, J. & Sagués, F. Taming active turbulence with patterned soft interfaces. Nat. Commun. 8, 564 (2017).
Guillamat, P., Ignés-Mullol, J. & Sagués, F. Control of active nematics with passive liquid crystals. Mol. Crys. Liq. Crys. 646, 226–234 (2017).
Guillamat, P., Hardoüin, J., Prat, B. M., Ignés-Mullol, J. & Sagués, F. Control of active turbulence through addressable soft interfaces. J. Phys.: Cond. Mat. 29, 504003 (2017).
Opathalage, A. et al. Self-organized dynamics and the transition to turbulence of confined active nematics. Proc. Natl Acad. Sci. USA 116, 4788–4797 (2019).
Keber, F. C. et al. Topology and dynamics of active nematic vesicles. Science 345, 1135–1139 (2014).
Suzuki, K., Miyazaki, M., Takagi, J., Itabashi, T. & Ishiwata, S. Spatial confinement of active microtubule networks induces large-scale rotational cytoplasmic flow. PNAS 114, 2922–2927 (2017).
Guillamat, P. et al. Active nematic emulsions. Sci. Adv. 4, eaao1470 (2018).
Ellis, P. W. et al. Curvature-induced defect unbinding and dynamics in active nematic toroids. Nat. Phys. 14, 85–90 (2018).
Wu, K. T. et al. Transition from turbulent to coherent flows in confined three-dimensional active fluids. Science 355, 6331 (2017).
Ravnik, M. & Yeomans, J. M. Confined active nematic flow in cylindrical capillaries. Phys. Rev. Lett. 110, 026001 (2013).
Whitfield, C. A., Marenduzzo, D., Voituriez, R. & Hawkins, R. J. Active polar fluid flow in finite droplets. Eur. Phys. J. E 37, 8 (2014).
Giomi, L. & DeSimone, A. Spontaneous division and motility in active nematic droplets. Phys. Rev. Lett. 112, 147802 (2014).
Sknepnek, R. & Henkes, S. Active swarms on a sphere. Phys. Rev. E 91, 022306 (2015).
Zhang, R., Zhou, Y., Rahimi, M. & De Pablo, J. J. Dynamic structure of active nematic shells. Nat. Commun. 7, 13483 (2016).
Alaimo, F., Köhler, C. & Voigt, A. Curvature controlled defect dynamics in topological active nematics. Sci. Rep. 7, 5211 (2017).
Sanchez, T., Chen, D. T., DeCamp, S. J., Heymann, M. & Dogic, Z. Spontaneous motion in hierarchically assembled active matter. Nature 491, 431–434 (2012).
Voituriez, R., Joanny, J. F. & Prost, J. Spontaneous flow transition in active polar gels. Europhys. Lett. 70, 404–410 (2005).
Shendruk, T. N., Doostmohammadi, A., Thijssen, K. & Yeomans, J. M. Dancing disclinations in confined active nematics. Soft Matter 13, 3853 (2017).
Guillamat, P., Ignés-Mullol, J., Shankar, S., Marchetti, M. C. & Sagués, F. Probing the shear viscosity of an active nematic film. Phys. Rev. E 94, 060602 (2016).
Prost, J., Jülicher, F. & Joanny, J. F. Active gel physics. Nat. Phys. 11, 111–117 (2015).
Doostmohammadi, A., Ignés-Mullol, J., Yeomans, J. M. & Sagués, F. Active nematics. Nat. Commun. 9, 3246 (2018).
Beris, A. N. & Edwards, B. J. Thermodynamics of Flowing Systems: With Internal Microstructure (Oxford University Press, 1994).
Thampi, S. P., Doostmohammadi, A., Golestanian, R. & Yeomans, J. M. Intrinsic free energy in active nematics. Europhys. Lett. 112, 28004 (2015).
Marenduzzo, D., Orlandini, E., Cates, M. E. & Yeomans, J. M. Steady-state hydrodynamic instabilities of active liquid crystals: Hybrid lattice Boltzmann simulations. Phys. Rev. E 76, 031921 (2007).
Thampi, S. P., Golestanian, R. & Yeomans, J. M. Vorticity, defects and correlations in active turbulence. Philos. Trans. A: Math. Phys. Eng. Sci. 372, 0366 (2014).
Giomi, L. Geometry and topology of turbulence in active nematics. Phys. Rev. X 5, 031003 (2015).
Chaté, H., Ginelli, F. & Montagne, R. Simple model for active nematics: quasi-long-range order and giant fluctuations. Phys. Rev. Lett. 96, 180602 (2006).
Ginelli, F., Peruani, F., Bär, M. & Chaté, H. Large-scale collective properties of self-propelled rods. Phys. Rev. Lett. 104, 184502 (2010).
Simha, R. A. & Ramaswamy, S. Hydrodynamic fluctuations and instabilities in ordered suspensions of self-propelled particles. Phys. Rev. Lett. 89, 058101 (2002).
Ramaswamy, S. & Rao, M. Active-filament hydrodynamics: instabilities, boundary conditions and rheology. New J. Phys. 9, 423 (2007).
Guillamat, P., Ignés-Mullol, J. & Sagués, F. Control of active liquid crystals with a magnetic field. PNAS 113, 5498–5502 (2016).
Pismen, L. M. Dynamics of defects in an active nematic layer. Phys. Rev. E 88, 050502 (2013).
Denniston, C. Disclination dynamics in nematic liquid crystals. Phys. Rev. B 54, 6272 (1996).
Norton, M. M. et al. Insensitivity of active nematic liquid crystal dynamics to topological constraints. Phys. Rev. E 97, 012702 (2018).
Gao, T. & Li, Z. Self-driven droplet powered by active nematics. Phys. Rev. Lett. 119, 108002 (2017).
DeCamp, S. J., Redner, G. S., Baskaran, A., Hagan, M. F. & Dogic, Z. Orientational order of motile defects in active nematics. Nat. Matter 14, 1110–1115 (2015).
Vromans, A. J. & Giomi, L. Orientational properties of nematic disclinations. Soft Matter 12, 6490–6495 (2016).
Srivastava, P., Mishra, P. & Marchetti, M. C. Negative stiffness and modulated states in active nematics. Soft matter 12, 8214–8225 (2016).
Tseng, Q. et al. Spatial organization of the extracellular matrix regulates cell–cell junction positioning. Proc. Natl. Acad. Sci. 109, 1506–1511 (2012).
Krüger, T. et al. The Lattice Boltzmann Method: Principles and Practice (Springer International Publishing, 2017).
Melaugh, G. et al. Shaping the growth behaviour of biofilms initiated from bacterial aggregates. PloS one 11, e0149683 (2016).
Lloyd, D. P. & Allen, R. J. Competition for space during bacterial colonization of a surface. J. R. Soc. Interface 12, 20150608 (2015).
Loring, B., Karimabadi, H. & Rortershteyn, V. A screen space GPGPU surface LIC algorithm for distributed memory data parallel sort last rendering infrastructures. Tech. Rep. (Lawrence Berkeley National Lab. (LBNL), Berkeley, CA, USA, 2014).
We are indebted to the Brandeis University Materials Research Science and Engineering Centers (MRSEC) Biosynthesis facility for providing the tubulin. We thank M. Pons, A. LeRoux, G. Iruela, P. Guillamat and B. Martínez-Prat (University of Barcelona) for assistance in the expression of motor proteins. We also thank P. Ellis and A. Fernandez-Nieves for kindly sharing all their image processing and defect detection algorithms. J.H., J.I.-M. and F.S. acknowledge funding from Ministerio de Economia, Industria y Competitividad, Spain (project FIS2016-78507-C2-1-P, Agencia Estatal de Investigación/European Regional Development Fund). J.H. acknowledges funding from the European Union's Horizon 2020 research and innovation program under grant agreement no. 674979- NANOTRANS. Brandeis University MRSEC Biosynthesis facility is supported by NSF MRSEC DMR-1420382. T.L-L. acknowledges funding from the French Agence Nationale de la Recherche (Ref. ANR-13-JS08-006-01). Gulliver laboratory acknowledges funding from 2015 Grant SESAME MILAMIFAB (Ref. 15013105) for the acquisition of a Nanoscribe GT Photonic Professional device. R.H. acknowledges that this research was supported by the European Union's Horizon 2020 research and innovation programme under the Marie Sklodowska-Curie grant agreement No 722497 - LubISS.
These authors contributed equally: Jérôme Hardoüin, Rian Hughes
Departament de Química Física, Universtitat de Barcelona, 08028, Barcelona, Spain
Jérôme Hardoüin, Jordi Ignés-Mullol & Francesc Sagués
Institute of Nanoscience and Nanotechnology, Universtitat de Barcelona, 08028, Barcelona, Spain
The Rudolf Peierls Centre for Theoretical Physics, Clarendon Laboratory, Parks Road, Oxford, OX1 3PU, UK
Rian Hughes, Amin Doostmohammadi & Julia M. Yeomans
Laboratoire de Physique et Mécanique des Milieux hétérogènes (PMMH), UMR CNRS 7636, ESPCI Paris, PSL Research University, Paris, France
Justine Laurent
Sorbonne Université, Univ. Paris Diderot, Paris, France
Laboratoire Gulliver, UMR CNRS 7083, ESPCI Paris, PSL Research University, Paris, France
Teresa Lopez-Leon
J.H., J.L., F.S. and J.I.-M. conceived the experiments and R.H., A.D. and J.M.Y. designed the simulations. J.H. performed the experiments in collaboration with T.L.-L. and analyzed the experimental data. R.H. conducted the numerical simulations. J.H., R.H. and J.M.Y. wrote the paper with contribution from all the authors.
Correspondence to Julia M. Yeomans or Jordi Ignés-Mullol.
Hardoüin, J., Hughes, R., Doostmohammadi, A. et al. Reconfigurable flows and defect landscape of confined active nematics. Commun Phys 2, 121 (2019). https://doi.org/10.1038/s42005-019-0221-x
January 2020, 16(1): 409-429. doi: 10.3934/jimo.2018160
A hybrid chaos firefly algorithm for three-dimensional irregular packing problem
Chuanxin Zhao 1, , Lin Jiang 2,, and Kok Lay Teo 2,
School of Computer Sciences and information, Anhui Normal University, Anhui Provincial Key Laboratory of Network and Information Security, Wuhu, 241000, China
Department of Mathematics and Statistics, Curtin University, Perth, WA 6845, Australia
* Corresponding author: Lin Jiang
Received March 2017; Revised April 2018; Published October 2018
Fund Project: The first author is supported by NSFC grants (61871412, 61772034, 61572036, 61672039, 61473326), the Anhui Provincial Natural Science Foundation (1708085MF156, 1808085MF172), and the Australian Research Council Linkage Program LP140100873
The packing problem studies how to pack multiple objects without overlap. Various exact and approximate algorithms have been developed for two-dimensional regular and irregular packing as well as three-dimensional bin packing. However, few results have been reported for three-dimensional irregular packing problems. This paper develops a method for solving three-dimensional irregular packing problems. A three-grid approximation technique is first introduced to approximate irregular objects. Then, a hybrid heuristic method is developed to place and compact each individual object, in which chaos search is embedded into the firefly algorithm to enhance the algorithm's diversity for optimizing the packing sequence and orientations. Results from several computational experiments demonstrate the effectiveness of the hybrid algorithm.
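As a rough illustration of the chaos-embedded firefly idea, the Python sketch below shows a generic firefly move in which the uniform random perturbation is replaced by a logistic chaotic map, together with a random-key decoding of a firefly position into a packing sequence. It is a sketch of the general technique only; the operators, parameter values and encoding used in this paper may differ.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """Chaotic logistic map used in place of uniform random numbers."""
    return mu * z * (1.0 - z)

def firefly_move(x_i, x_j, z, beta0=1.0, gamma=1.0, alpha=0.2):
    """Move firefly i toward a brighter firefly j with a chaotic perturbation.

    Attraction decays with squared distance as beta0*exp(-gamma*r^2); the
    random step is driven by the chaotic sequence `z` (same shape as x_i).
    """
    r2 = np.sum((x_j - x_i) ** 2)
    beta = beta0 * np.exp(-gamma * r2)
    z = logistic_map(z)                          # advance the chaotic sequence
    step = alpha * (z - 0.5)                     # chaotic perturbation
    return x_i + beta * (x_j - x_i) + step, z

# Random-key decoding: sorting a firefly's components yields a packing sequence.
keys = np.array([0.7, 0.1, 0.9, 0.4])
sequence = np.argsort(keys)                      # objects packed in this order
```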
Keywords: Irregular packing, raster approximation, firefly algorithm, chaos search.
Mathematics Subject Classification: Primary: 80M50; Secondary: 90C27.
Citation: Chuanxin Zhao, Lin Jiang, Kok Lay Teo. A hybrid chaos firefly algorithm for three-dimensional irregular packing problem. Journal of Industrial & Management Optimization, 2020, 16 (1) : 409-429. doi: 10.3934/jimo.2018160
Figure 1. Grid representation of a cylinder
Figure 2. Encoding of the firefly individual
Figure 3. A procedure of back bottom left placement for 3D irregular packing
Figure 4. Architecture of the chaos firefly algorithm for packing problem
Figure 5. An example of comparison between random sequence packing result and optimized sequence packing result
Figure 6. The comparison packing result of instance 6 without rotation
Figure 7. The comparison packing result of instance 8 with rotation
Figure 8. Algorithm convergence over 9 instances
Table 1. Data sets specification
Instance Type Number Rotation Object scale
1 cylinder 1 50 0 $50 \times 50 \times 50$
3 cylinder 3 100 0 $50 \times 50 \times 50$
4 irregular 1 (cylinder, complex structure) 50 0 $50 \times 50 \times 50$
6 irregular 3 (cylinder, complex structure) 100 0 $50 \times 50 \times 50$
7 irregular 4 (cylinder, complex structure) 40 0, 3 $50 \times 50 \times 50$
Table 2. Parameters of the hybrid firefly algorithm
Parameter Value
population size $number \times (1 + r_{max})$
$T_0$ 0.064
Temperature update ratio 1.6
Iteration 300
Table 3. The maximal height and the efficiency achieved by three algorithms in 10 runs
Instance GA PSO FA HFA
Height efficiency Height efficiency Height efficiency Height efficiency
1 122 52.5% 120 55.4% 117 56.8% 120 55.4%
Table 4. The statistical performance of the algorithm without rotation
Ins. GA PSO FA HFA
Best Avg Stdev Best Avg Stdev Best Avg Stdev Best Avg Stdev
1 120 122.1 2.172 120 121.3 1.341 117 120.8 2.049 120 120.3 0.547
2 171 172.5 2.918 171 172.1 2.121 170 172.5 3.140 167 170 2.387
8 232 245.1 6.901 231 242 7.615 234 244.2 3.421 225 234.2 2.863
Table 5. Comparison between the proposed approach and placement strategy
Instance enclosure without rotation enhance(%) with rotation enhance(%)
1 124.2 120.3 3.14% N/A N/A
4 225.2 215.2 4.44% 210.2 2.32%
Long scan depth optical coherence tomography on imaging accommodation: impact of enhanced axial resolution, signal-to-noise ratio and speed
Yilei Shao1,2,
Aizhu Tao1,2,
Hong Jiang1,
Meixiao Shen2,
Dexi Zhu2,
Fan Lu2,
Carol L. Karp1,
Yufeng Ye1,3 &
Jianhua Wang1,4,5
Spectral domain optical coherence tomography (SD-OCT) is a useful tool for studying accommodation in the human eye, but its maximum image depth is limited by the decreased signal-to-noise ratio (SNR). In this study, improved optical resolution, speed and SNR were achieved with custom-built SD-OCT systems, and the impact of these improvements on imaging accommodation was evaluated.
Three systems with different spectrometer designs, including two Charge Coupled Device (CCD) cameras and one Complementary Metal-Oxide-Semiconductor Transistor (CMOS) camera, were tested. We measured the point spread functions of a mirror at different positions to obtain the axial resolution and the SNR of the three OCT systems, each powered by a light source with a 50 nm bandwidth centered at a wavelength of 840 nm. Two normal subjects, aged 26 and 47 years, and one 75-year-old patient with an implanted intraocular lens were imaged.
The results indicated that the spectrometers using cameras with 4096 pixels optimized the axial resolution because they used the full spectrum provided by the light source. The CCD camera system with 4096 pixels had the highest SNR and the best image quality. The system with the 4096-pixel CMOS camera had the highest speed but a compromised SNR compared with the 4096-pixel CCD camera.
Using these three OCT systems, we imaged the anterior segment of the human eye before and after accommodation and obtained similar results among the different systems. The system using the CMOS camera, with an ultra-long scan depth, high resolution and high scan speed, exhibited the best overall performance and is therefore recommended for real-time imaging of accommodation.
In the human eye, accommodation is the ability to provide clear vision during near tasks by increasing refractive power. With presbyopia and cataracts, the ability to accommodate declines [1]. Research to understand the mechanism of accommodation and to restore accommodative ability has attracted great interest among ophthalmic and optometric researchers. The accommodation apparatus located in the ocular anterior segment is the key component that generates the refractive power to focus on close targets [2, 3]. Biometry of the anterior segment is therefore critical to understanding the mechanism of accommodation and to finding effective ways of restoring it. Several techniques are available for imaging the ocular anterior segment in vivo, including Scheimpflug photography, ultrasound biomicroscopy (UBM), magnetic resonance imaging (MRI), Purkinje imaging and optical coherence tomography (OCT) [4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23,24,25,26]. Each of these approaches has advantages and disadvantages. Ultrasound is used with a water bath, which may distort or depress the anterior surface and change the biometric measurements [8]. Scheimpflug photography requires dilation, a non-physiologic condition that limits the use of this method for studying accommodation, and it yields low-resolution images [4,5,6]. MRI is a non-optical imaging technique with high cost and low resolution; it is relatively time-consuming, making it difficult to obtain dynamic images [5].
OCT is a non-contact, non-invasive technology with high scan speeds and high axial resolution. Spectral domain OCT (SD-OCT) has the capability to image accommodation in both static and dynamic states [10,11,12,13,14,15,16,17,18,19,20,21,22,23, 25]. However, the maximum image depth is limited by the decreased signal-to-noise ratio (SNR) in SD-OCT, which prevents the wide use of SD-OCT with long scan depths. The ideal SD-OCT requires a good SNR through the entire scan depth and a good imaging resolution over the entire axial range of the anterior segment. The whole anterior segment image, which includes the cornea, the anterior chamber and the crystalline lens, is essential for optical correction of the images and automatic surface registration/detection to obtain biometric measurements. The dual channel approach and image switching have been used to extend scan depth [16, 20, 27]. Recently, we reported a method to improve the SNR by overlapping two images acquired with an ultra-long scan depth SD-OCT using two alternative reference arm lengths for imaging the entire anterior segment in vivo [20, 25]. Using this method, the range of scan depth with normalized SNR reached more than 11 mm, which was enough to cover the axial range of the entire anterior segment. Our previous approach, with a spectrometer using a Charge Coupled Device (CCD) camera with 2048 pixels, involved a trade-off: only a portion of the full spectrum provided by the light source was used in exchange for the long scan depth [20, 25]. In addition, the scan speed in our previous study was slow due to the speed limitation of the CCD camera used. As demonstrated in the literature, the latest Complementary Metal-Oxide-Semiconductor Transistor (CMOS) technology attains faster imaging speeds than CCD technology. However, CMOS may be subject to lower sensitivity and higher noise [28]. Before further improvements in spectrometer design can be materialized for imaging the entire anterior segment, the impact of axial resolution, SNR and speed with different spectrometer designs needs to be better understood. The goal of the present work was to demonstrate the impact of these spectrometer designs on image quality in anterior segment biometry during accommodation.
OCT systems and performance
We tested three systems with different spectrometer designs including two CCD cameras and one CMOS camera. These three systems were based on the Michelson interferometer, which consists of a light source, a reference arm, a sample arm and a spectrometer, as diagrammed in Fig. 1. A superluminescent diode (SLD, InPhenix, IPSDD0808, Livermore, CA, USA) centered at a wavelength of 840 nm with a full-width at half-maximum bandwidth of 50 nm was used as the light source. The power of incident light on the corneal surface of the human eye was 1.25 mW, which was well below the safe ANSI Z136.1 cut-off value. The beam was split into the sample arm and the reference arm using a 50:50 fiber coupler.
A schematic diagram depicting the spectral-domain OCT systems. SLD: superluminescent diode, OI: isolator, FC: fiber coupler, PC: polarization controller, CL1–3: collimating lenses, DC: dispersion compensator, L1–4: objective lenses, M1–2: refractive mirror, GM: galvanometer mirror, LCD: liquid-crystal display, DG: diffraction grating, CA: camera (CCD with 2048 pixels for system 1, CCD with 4096 pixels for system 2 and CMOS with 4096 pixels for system 3)
The three systems had a similar spectrometer design composed of four parts: a collimating lens (f = 50 mm, OZ Optics, Ottawa, Canada), a 1800 lines/mm volume holography transmission grating, an image enlargement lens with a focal length of 240 mm (f = 240 mm, Schneider Optics, Hauppauge, NY), and a line array camera. The three spectrometers were based on cameras with different data transfer rates and scan speeds (Table 1). The acquired interference spectrum data were transferred using the image acquisition board (PCI-1428 for system 1 and PCIe-1429 for systems 2 and 3, National Instruments, Austin, TX). A computer from Hewlett-Packard with an 8 GB RAM memory, an Intel Core 2 Quad processor and a Windows 7 64-bit operation system was used for the control and data acquisition of the OCT instruments. All OCT data acquisition drivers were developed in Labview (Version 2011, National Instruments, Austin, TX).
Table 1 Comparison of the different cameras used in the three optical coherence tomography systems
Figure 2a illustrates the spectrum of the light source captured by the three OCT systems. The calculated spectral resolution was 0.015 nm, which corresponds to a detectable scan depth of 11.76 mm in air. The system performance, including the real axial resolution and the sensitivity, was characterized by imaging a mirror in the sample arm at different positions. A neutral density filter with an optical density (OD) of 2.0 reduced the signal intensity. As mentioned elsewhere [12, 29], the resolution is indicated by the width of the point spread function (PSF). The signal intensity was obtained from the Fourier transform on a logarithmic scale, and the sensitivity was calculated from the SNR as
$$ sensitivity=10\times \log \left(\frac{S}{\sigma}\right)+20\times OD $$
where S is the signal peak, σ is the noise, and OD is 2.0 in this study.
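As a worked example of this formula, the Python sketch below (an illustrative helper, not the acquisition software of the systems described here) takes the linear-scale magnitude of a Fourier-transformed A-scan `psf`, a `noise_region` index range away from the peak used to estimate σ, and the filter density `od`. A peak 1000 times above the noise floor measured through the OD 2.0 filter would give 10·log10(1000) + 40 = 70 dB.

```python
import numpy as np

def sensitivity_db(psf, noise_region, od=2.0):
    """Sensitivity from a point-spread function measured through an OD filter.

    `psf` is the Fourier-transformed A-scan magnitude on a linear scale and
    `noise_region` indexes a portion of it away from the peak, used for sigma.
    """
    S = psf.max()                      # signal peak
    sigma = psf[noise_region].std()    # noise estimate
    return 10.0 * np.log10(S / sigma) + 20.0 * od

# Example: a peak 1000x above the noise floor through an OD 2.0 filter
# gives 10*log10(1000) + 40 = 70 dB.
```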
Spectrum of the light source captured by the three different systems (a) and the point spread functions (PSF) obtained using the three systems at a path difference of 0.5 mm (b). a: The areas of the available pixels from the cameras are indicated in blue (CCD with 2048 pixels), red (CCD with 4096 pixels) and green (CMOS with 4096 pixels) rectangles, respectively. b: Blue, the PSF of system 1 with the measured resolution of 10.9 μm in air; Red, the PSF of system 2 with the measured resolution of 7.0 μm in air; Green, the PSF of system 3 with the measured resolution of 7.0 μm in air
System 1 was based on our previously designed spectrometer and had a scan depth of 12.34 mm. The scan speed was up to 24,000 A-scans per second, which was limited by the CCD line scan camera (2048 pixels; pixel size 10 μm; Aviiva-SM2010; E2V Technologies, NY, USA). The axial resolution was approximately 10.4 μm in air (Fig. 2b, blue line). The maximum sensitivity was 101 dB near the zero delay line, with a 61 dB sensitivity drop at 11 mm (Fig. 3, blue line).
The sensitivity of the three systems measured at different image depths from the zero-delay line. Blue line, system 1 with CCD 2048 pixels; red line, system 2 with CCD 4096 pixels; green line, system 3 with CMOS. The solid line was the combined sensitivity acquired from two reference arms; the dotted line was obtained from a single arm
System 2 used a spectrometer based on a CCD camera with 4096 pixels per A-line (pixel size 10 μm; Aviiva-SM2-CL-4010; E2V Technologies, Elmsford, NY). The scan depth was 11.94 mm and the scan speed was 12,000 A-lines/s. Measured axial resolution was approximately 7.0 μm near the zero-delay line in air (Fig. 2b, red line). The sensitivity of the spectrometer was 111 dB near the zero delay line and had a 71 dB sensitivity drop at 11 mm (Fig. 3, red line).
System 3 used a spectrometer with a scan depth of 11.98 mm based on a CMOS camera that had a high scan speed of up to 70,000 A-lines/s (Basler Sprint spL4096-140 k; pixel size 10 μm; Basler Inc., Exton, PA). The axial resolution of the system near the zero-delay line was approximately 7.0 μm in air (Fig. 2b, green line). The sensitivity was 103 dB near the zero delay line and had a 63 dB sensitivity drop at 11 mm (Fig. 3, green line).
A special switchable reference arm was designed to acquire two images sequentially, similar to our previous study [20, 25] and others [16]. In this experiment, image overlapping was used to maximize the SNR over the full image depth. This approach facilitates automatic registration and automatic boundary detection, which are currently under development. A galvanometer switched the light between the two mirrors mounted on the linear stages (M1 and M2 in Fig. 1) and was controlled by a square wave signal from the computer. Alternation between the two reference arms was synchronized with the scanning. The optical path difference (OPD) between the two arms determined the axial offset between the two frames, which was about 11 mm. The OPD was slightly adjusted with a linear stage so that the zero-delay lines of the two frames were placed at the top and bottom of the anterior segment for each individual [20, 25].
The sample arm was mounted on a modified slit-lamp microscope and used to adjust the image acquisition. An x-y galvanometer pair imaged the ocular anterior segment at the horizontal and the vertical meridians for alignment and acquisition using the custom acquisition software. To precisely align the scanning position, an X-Y cross aiming mode with 4 windows was used for live viewing. Two windows were used for viewing the images of the cornea and crystalline lens on the horizontal meridian and another two for viewing them on the vertical meridian. The operator monitored and adjusted the scanning position on both meridians in real time. Four images were acquired when the specular reflection was noted on both meridians, which ensured that the beam passed through the corneal apex. We used the cross-hair alignment live view to align the iris image on both horizontal and vertical scans so that the OCT beam was perpendicular to the iris plane (Fig. 1, insert). There is an angle between the visual axis and geometric axis of the eye known as the Kappa angle [30]. The OCT beam was aligned with the pupillary axis rather than the visual axis in the present study. In real-time, four images were quickly acquired, processed and displayed (Fig. 1). This real-time function avoided eye tilt and provided a better alignment of the eye during scanning. The focal plane of the beam was set at the anterior part of the crystalline lens by making on-axial adjustments of the objective lens (L2 in Fig. 1).
A liquid-crystal display (LCD) screen displaying a white Snellen letter "E" on a black background was set 10 cm from the tested eye. The target was controlled by a computer that alternated between a blurred and a sharp picture. A trial lens (L4 in Fig. 1) in front of the LCD screen corrected for refractive error. The LCD and trial lens were combined and adjusted by a dual-axis translation stage to make vertical and horizontal target adjustments.
Experimental procedure and image analysis
This protocol was approved by the institutional review board for human research at the University of Miami. Informed consent was obtained from each subject, and all subjects were treated in accordance with the tenets of the Declaration of Helsinki. An eye from a 47-year-old male subject was first imaged using system 3 to test the instrument with the switchable reference arm.
The exposure time of the CMOS camera was set to 77 μs, which corresponds to a scan rate of 10,000 A-scans/s. The measurement lasted approximately 200 ms per frame to acquire a single image consisting of 2048 A-scans. The subject sat in front of the slit-lamp and looked forward at the internal fixation target "E" with near equivalent spherical refractive correction. After adjusting fixation to ensure that the corneal apex appeared in both the horizontal and vertical meridians for perfect alignment, a 14 mm cross-sectional scan was obtained.
Figures 4a and b show two single frames obtained from a 47-year-old subject using system 3 under relaxed conditions. The zero-delay planes were set at the top (Fig. 4a) and bottom (Fig. 4b) of the images, which showed the cornea, iris and the anterior part of the crystalline lens. Dim images of the posterior lens (a) and of the entire lens without the cornea (b) were also present because the signal-to-noise ratio decreased with depth, as shown in Fig. 3. The two frames clearly showed the common portion of the iris and the anterior surface of the lens and were then manually overlapped, with registration of the common features, using imaging software (Adobe Photoshop CS, Version 8.0, Adobe Systems Inc., San Jose, CA). The common portion, including the iris and the anterior surface of the crystalline lens, was used for registration and overlapping of the two frames. The rotation and translation between the two frames were adjusted and corrected during overlapping. In the overlaid image, the entire anterior segment, including the anterior and posterior surfaces of the crystalline lens, was clearly visualized, as well as the cornea, anterior chamber and iris (Fig. 4c). In this study, we selected the method of image overlapping and did not crop the part of the image with low sensitivity as described elsewhere [16]. This approach was beneficial because the human eye may move slightly during image acquisition, and the rotation/translation between the two images could be corrected through image registration. The offset between the two zero-delay lines was set at approximately 11 mm. Therefore, the low-SNR part of one arm was compensated by the high-SNR part of the other arm. The drop-off of the sensitivity was compensated through the entire scan depth, as demonstrated in Fig. 3. In the combined image, the drop-off was calculated as the difference between the highest (at one of the positions near the zero-delay line) and lowest (at the middle of the scan depth) sensitivities. The drop-off of the combined system was 21 dB (system 1), 28 dB (system 2) and 24 dB (system 3).
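The frame-combination step can be sketched numerically as follows. The Python snippet below is only illustrative (the registration in this study was performed manually in Photoshop): it assumes two frames of equal size, `frame_top` with its zero-delay line at the top and `frame_bottom` with its zero-delay line at the bottom, refines a nominal axial pixel offset set by the OPD by maximizing the correlation of the overlapping region, and fuses the frames with a pixel-wise maximum so that the high-SNR half of each frame dominates. The function name, parameters and the max-fusion rule are assumptions made for illustration.

```python
import numpy as np

def combine_frames(frame_top, frame_bottom, nominal_shift, search=20):
    """Register and fuse two frames whose zero-delay lines sit at opposite ends.

    `nominal_shift` is the axial pixel offset implied by the optical path
    difference between the two reference arms; it is refined by maximizing
    the correlation of the overlapping region, and the frames are fused with
    a pixel-wise maximum so the high-SNR half of each frame dominates.
    """
    depth, width = frame_top.shape
    best_shift, best_score = nominal_shift, -np.inf
    for s in range(nominal_shift - search, nominal_shift + search + 1):
        overlap = depth - s
        if overlap <= 0:
            continue
        score = np.corrcoef(frame_top[s:, :].ravel(),
                            frame_bottom[:overlap, :].ravel())[0, 1]
        if score > best_score:
            best_shift, best_score = s, score
    combined = np.zeros((depth + best_shift, width))
    combined[:depth, :] = frame_top
    combined[best_shift:, :] = np.maximum(combined[best_shift:, :], frame_bottom)
    return combined
```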
The images of the entire anterior segment from a 47-year-old subject were obtained and processed. a: The image and the longitudinal reflectivity profiles obtained from reference arm 1; b: The image and the longitudinal reflectivity profiles obtained from reference arm 2; c: The combined image obtained by overlapping images a and b, and the longitudinal reflectivity profiles through the whole anterior segment. Bar = 1 mm
Custom-developed software produced the longitudinal reflectivity profiles in the first step of image analysis. The specular reflex on the corneal apex induces vertical hyper-reflective lines that interfere with image analysis [31]. The central 50 axial scans (approximately 0.36 mm width) were removed to avoid distortion from the central specular hyper-reflective reflex. The profiles of the 50 axial scans on either side of the anterior segment were also processed. The boundaries of the cornea and the lens were identified using the peaks of the reflectivity profiles (Fig. 4c). The internal structure was identified by visualizing the cross-sectional images (Fig. 4c) for the purpose of demonstration. The central corneal thickness (CCT), anterior chamber depth (ACD) and central lens thickness (CLT) were also measured. Next, the boundaries of the cornea and the lens were outlined semi-manually using software specifically designed to reconstruct the image. A custom-developed algorithm was used for boundary correction, and the refractive index of each medium (1.387 for the cornea [32], 1.342 for the aqueous humor [33] and 1.408 for the crystalline lens [34] at 840 nm wavelength) was applied in this algorithm. Then, the radii of curvature of the anterior and posterior surfaces of the cornea and lens were calculated. The algorithm for optical correction was validated in our previous study [25].
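A minimal sketch of the axial biometry step is given below, assuming a single longitudinal reflectivity profile through the apex and the group refractive indices quoted above. It detects the four strongest boundary peaks and converts the optical path lengths between them into geometric distances; the function name, the `pixel_size_um` and `prominence` parameters, and the use of scipy's peak finder are illustrative assumptions, not the semi-manual procedure actually used in this study.

```python
import numpy as np
from scipy.signal import find_peaks

# Group refractive indices at 840 nm quoted above.
N_CORNEA, N_AQUEOUS, N_LENS = 1.387, 1.342, 1.408

def axial_biometry(profile, pixel_size_um, prominence=5.0):
    """CCT, ACD and CLT from one longitudinal reflectivity profile.

    The four most prominent peaks are taken as the anterior/posterior corneal
    and lens boundaries; optical path lengths between them are converted to
    geometric distances with the group indices above.
    """
    peaks, props = find_peaks(profile, prominence=prominence)
    if len(peaks) < 4:
        raise ValueError("expected at least four boundary peaks")
    top4 = np.sort(peaks[np.argsort(props["prominences"])[-4:]])
    opl = np.diff(top4) * pixel_size_um   # optical path lengths in micrometres
    cct = opl[0] / N_CORNEA               # central corneal thickness
    acd = opl[1] / N_AQUEOUS              # anterior chamber depth
    clt = opl[2] / N_LENS                 # central lens thickness
    return cct, acd, clt
```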
The three systems acquired the full range of the anterior segment in the left eye of a 26-year-old male subject. The refractive error in the tested eye was − 7.00 DS/− 0.5 DC × 180. The images were obtained at both the horizontal and vertical meridians under relaxed and 4.00 D accommodative states in a normal examination room under dim light. The two-dimensional cross-sectional scans (B-scans) consisted of 2048 line scans (A-scans), using 2048 points per A-scan in system 1 or 4096 points in systems 2 and 3. To compare the three systems, the exposure time of each system was set to 4 times its initial value, namely 144 μs (systems 1 and 2) and 44 μs (system 3), corresponding to scan speeds of 6000 A-lines/s and 17,500 A-lines/s, respectively. It took approximately 333 ms per frame using systems 1 and 2, and approximately 114 ms using system 3.
The 26-year-old healthy subject and a 75-year-old patient with an implanted monofocal intraocular lens (IOL, AcrySof SA60, Alcon) were imaged dynamically using system 3 with the CMOS camera. In this case, the anterior segment length from the anterior surface of the cornea to the posterior surface of the IOL in the implanted eye was shorter than that of the phakic eye because the IOL is thin. Therefore, the distance between the two reference mirrors was decreased to place the zero-delay line of arm 2 near the posterior pole of the IOL. Thirty-one combined images with 1024 A-lines were continuously acquired for 3.72 s, with a single frame taking 0.12 s and a frame rate of 8.3 frames per second. The OCT speed was 17,500 A-scans per second. The X-Y alignment was used, but only horizontal images were obtained. The refractive correction for near vision was added to the trial lens. The target letter "E" was blurred at first to fog the eye and relax accommodation. The accommodative stimulus of 4.00 D was presented 1 s after the start of scanning by changing the target from blurred to sharp. After outlining the peak intensity of the axial profile, as described above, the central corneal and crystalline lens/IOL thicknesses and the anterior chamber depth were measured, and the results for the phakic eye and the IOL-implanted eye were compared.
Figure 5 depicts the combined OCT images of the left eye of the young subject obtained with the different systems. The image from system 2, using a CCD with 4096 pixels (Fig. 5b), showed the best contrast among the three devices due to its high sensitivity. Even though the background noise in the CMOS image appeared higher than that of the other instruments, the contrast was almost equivalent to that obtained with system 2 (Fig. 5c). The central Bowman's layer was resolved in the magnified images from systems 2 and 3 (Fig. 5b1 and c1), whereas the boundaries of the corneal layers in the image from system 1 were blurred (Fig. 5a1). Moreover, the boundaries of the Bowman's layer were barely identifiable as peaks in the reflectivity profiles of system 1 but were easily distinguished in systems 2 and 3 (Fig. 5a4–c4, peaks a and b) [35]. The entire anterior segment was successfully visualized using all three systems, and the boundaries of the cornea and lens were clearly distinguished. Not only the axial lengths across the full-length ocular anterior segment but also the radii of curvature of the cornea and lens were similar among the three OCT systems (Fig. 6 and Table 2).
The uncorrected images taken from the entire anterior segment of a 26-year-old subject using the three systems. a: Image obtained by system 1 using a CCD camera with 2048 pixels; b: Image obtained by system 2 using a CCD camera with 4096 pixels; c: Image obtained by system 3 using a CMOS camera. a1-a3, b1-b3, c1-c3: The magnified images of the corneal apex (1), the anterior (2) and the posterior (3) of the lens surface using the three systems, respectively. a4, b4, c4: Longitudinal reflectivity profiles through the cornea. The boundaries of the Bowman's layer were identified as the peaks a and b. Bar = 500 μm
The longitudinal reflectivity profiles from a 26-year-old subject under the relaxed (a) and the accommodative (b) states. Blue line: Longitudinal profile obtained from system 1; Red line: Longitudinal profile obtained from system 2; Green line: Longitudinal profile obtained from system 3. The contrast scales were adjusted before obtaining the reflectivity profiles to demonstrate the peak locations representing the measured boundaries
Table 2 Anterior segment biometry obtained by the three devices under relaxed and accommodative states on the horizontal and the vertical meridian
As shown in Fig. 7, the IOL was clearly visualized in the overlapped image. Figure 8 shows the dynamic changes in the anterior segment of the phakic eye and the IOL-implanted eye. The thickness of the cornea (Fig. 8a) did not change during accommodation. The decrease in ACD (Fig. 8b, blue line) and the increase in CLT (Fig. 8c, blue line) in the phakic eye were consistent with a sigmoidal function. The ACD in the IOL-implanted eye tended to decrease, although the change was much smaller than that in the phakic eye (Fig. 8b, red line). The thickness of the IOL remained unchanged during accommodation (Fig. 8c, red line).
The uncorrected image of the anterior segment obtained from a 75-year-old IOL-implanted eye. The cornea, anterior chamber, iris and the IOL are clearly presented. The image consists of 1024 A-lines of 4096 pixels each. Bar = 500 μm
The dynamic changes of the axial biometry of the anterior segment depicted for both a phakic eye and an IOL implanted eye. a: the dynamic changes in central corneal thickness; b: the dynamic changes in anterior chamber depth; c: the dynamic changes in central lens thickness. Blue line: phakic eye; Red line: IOL implanted eye. CCT, central corneal thickness; ACD, anterior chamber depth; CLT, central lens thickness
The SD-OCT provided high data acquisition speeds and high axial resolutions. However, the limited scan depth affected imaging of the entire anterior segment. Removing the complex conjugate artifacts in SD-OCT permitted deeper imaging depths, using high-speed CMOS cameras to capture multiple images and to eliminate the complex ambiguity [10, 14, 17, 19, 23]. However, when a single OCT channel was used, this technique reduced the imaging speed. This approach achieved an axial scan depth of up to approximately 10 mm but could not image accommodation in some highly myopic eyes. Previously, we developed a dual-channel dual-focus OCT for imaging accommodation [13]. The reflected light in the sample arm was attenuated by 50% for each channel, which decreased the signal-to-noise ratio [13, 23]. Additionally, the two-channel system imaged the posterior lens region and the region from the cornea to the anterior lens, but failed to image the central crystalline lens area due to a gap between the two simultaneous OCT images. High-speed reflective Fabry-Perot tunable lasers allowed the optical frequency domain imaging system (also called swept source OCT) to attain longer imaging depths of 12 mm, but the axial resolution (9–14 μm) was worse than in SD-OCT [15, 17, 21, 22, 36]. In a previous study, we tested a spectrometer with a 12 mm scan depth that imaged the entire ocular anterior segment. The system demonstrated good repeatability for measuring the anterior segment and was an excellent tool for studying accommodation [25].
Sensitivity is an important aspect of SD-OCT, as it determines the contrast of the image and the maximum detectable depth. The intensity of light reflected back from deeper tissue is extremely low because biological tissue is not completely transparent. The signal intensity decreased as the imaged depth increased, indicating that the signal-to-noise ratio decreased as the position moved farther away from the zero-delay line [18, 37]. By altering the placement of the mirrors in the reference arm, the axial imaging range could be extended by stitching two images together [16, 20, 25, 27]. Cropping the images for stitching, as demonstrated previously, may result in a sensitivity valley at the center of the image [16]. If the scan depth is long enough, image overlap may be beneficial for normalizing the SNR and for subsequent image registration, as demonstrated previously [20, 25] and in the present study. Based on this approach, recently developed automatic software could extract and trace the contours of the iris and the anterior lens surface for image transformation (including rotation and translation) between the two images and then overlap them.
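The overlap-based combination described above can be illustrated with a minimal sketch. This is an illustrative reconstruction, not the authors' software: the array sizes, the known axial overlap and the simple averaging in the overlapping region are all assumptions.

```python
import numpy as np

# Illustrative sketch: combine two B-scans that cover complementary depth
# ranges with a known axial overlap; the overlapping rows are averaged rather
# than cropped, so no sensitivity "valley" is introduced at the junction.
def stitch_axially(anterior_img, posterior_img, overlap_px):
    assert anterior_img.shape[1] == posterior_img.shape[1], "same lateral width expected"
    top_only = anterior_img[:-overlap_px]                      # unique anterior rows
    blended = 0.5 * (anterior_img[-overlap_px:] + posterior_img[:overlap_px])
    bottom_only = posterior_img[overlap_px:]                   # unique posterior rows
    return np.vstack([top_only, blended, bottom_only])

anterior = np.random.rand(1024, 512)    # placeholder B-scan (depth x lateral)
posterior = np.random.rand(1024, 512)   # placeholder B-scan with shifted zero-delay
print(stitch_axially(anterior, posterior, overlap_px=128).shape)   # (1920, 512)
```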
Low resolution was a drawback of the original system; it was overcome by using cameras with more pixels and by projecting a wider bandwidth onto the camera line. The theoretical axial resolution of SD-OCT improves with wider bandwidths and lower central wavelengths [38]. In the present study, the SLD had a central wavelength of 840 nm and a bandwidth of 50 nm; the axial resolution of the light source was theoretically calculated to be 6.3 μm. However, the spectral range of the line array camera limited the use of the available bandwidth of the SLD, because the spectrum was truncated by the configuration of the spectrometer. The measured axial resolution was worse than the theoretical value for the CCD with 2048 pixels. This phenomenon of decreased resolution due to fewer active camera pixels has been described elsewhere [10, 39]. In the present study, the axial resolutions of the two systems using 4096-pixel array cameras were similar and close to the theoretical value, as a result of the almost full projection of the bandwidth of the light source.
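The theoretical value quoted above follows from the standard Gaussian-spectrum expression for the FWHM axial resolution, delta_z = (2 ln 2 / pi) * lambda0^2 / delta_lambda. A quick check with the stated source parameters (the exact figure depends on the assumed spectral shape, so a small discrepancy with the 6.3 μm quoted in the text is expected):

```python
import math

# FWHM axial resolution of OCT for a Gaussian source spectrum (in air, n = 1):
# delta_z = (2 ln 2 / pi) * lambda0^2 / delta_lambda
def axial_resolution_um(center_nm, bandwidth_nm):
    return (2 * math.log(2) / math.pi) * center_nm**2 / bandwidth_nm / 1000.0

print(axial_resolution_um(840.0, 50.0))   # ~6.2 um, close to the ~6.3 um quoted
```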
Image acquisition speed is another important factor in designing a long scan depth system for imaging accommodation. The acquisition time should be short in this OCT application because the accommodative process is highly dynamic. The CMOS camera, with its high data transfer rate, makes it possible to investigate the changing ocular anterior segment as a function of the response time during dynamic accommodation. Some researchers have determined that the accommodative response increases as a function of time and can be fitted to a sigmoidal curve [40, 41]. In the present study, sigmoidal time-dependent changes in lens thickness and anterior chamber depth were evident during accommodation. Interestingly, the anterior chamber depth in the IOL implanted eye decreased slightly in response to the accommodation stimulus, implying that the IOL moved forward. This phenomenon has also been reported elsewhere, even though the IOL was designed as a monofocal lens [42, 43]. This finding indicates that the CMOS system, with its high speed, may be suitable for imaging the subtle changes of accommodative biometry. On the other hand, as the most important component, the crystalline lens reshapes its surface in a complex manner, with tilting and/or decentration. Thus, three-dimensional scan patterns are required, which the OCT based on the CMOS camera can perform [10]. In the present study, the light exposure time of the CMOS was set to 44 μs, yielding an acquisition time of 0.12 s for a single image, which is short enough to image the human eye in real time or with a three-dimensional scan pattern.
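The sigmoidal time course mentioned above can be fitted directly to the measured biometry. The snippet below is a generic sketch on synthetic data; the functional form, parameter values and noise level are assumptions, not the study's data.

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(t, baseline, amplitude, t0, tau):
    """Sigmoidal accommodative response: baseline + amplitude / (1 + exp(-(t - t0)/tau))."""
    return baseline + amplitude / (1.0 + np.exp(-(t - t0) / tau))

t = np.arange(0.0, 3.0, 0.12)                          # ~8.3 frames per second
acd = sigmoid(t, 3.00, -0.06, 1.0, 0.25)               # synthetic ACD trace (mm)
acd += np.random.default_rng(1).normal(0.0, 0.005, t.size)

params, _ = curve_fit(sigmoid, t, acd, p0=[3.0, -0.05, 1.0, 0.3])
print("fitted amplitude (mm): %.3f, time constant (s): %.2f" % (params[1], params[3]))
```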
For static accommodation, we tested imaging of the entire anterior segment using the three systems with scan speeds of 2.7 FPS (6000 A-scans per second) for the CCD systems and 8.3 FPS (17,500 A-scans per second) for the CMOS system. The integration times for all three systems needed to be increased, which decreased the scan speeds. This approach of increasing the integration time (and thereby reducing the scan speed) has been used in many previous studies, including ours [10, 25]. Our dynamic accommodation experiment demonstrated that the accommodative response can be as quick as 0.5 s, and a slow CCD system at 2.7 FPS may not be fast enough to capture the start of the accommodative response to the stimulus. Based on these experiments, we demonstrated the impact of the scan speed on image quality and real-time data acquisition. We also demonstrated the minimal integration time needed by the three systems to acquire high-quality images in the static accommodation experiment. Taken together, the CMOS system would be recommended for imaging real-time accommodation, while all three systems can be used for imaging static accommodation.
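The quoted frame rates and A-scan rates are mutually consistent; a small arithmetic check is shown below (the number of A-lines per frame is inferred from these figures and is an assumption, since it is not stated explicitly for the dynamic protocol).

```python
# Frame time and A-lines per frame implied by the quoted scan speeds.
for name, ascan_rate_hz, fps in [("CCD systems", 6000, 2.7), ("CMOS system", 17500, 8.3)]:
    print(f"{name}: ~{ascan_rate_hz / fps:.0f} A-lines/frame, "
          f"frame time ~{1.0 / fps:.2f} s")
# CCD systems: ~2222 A-lines/frame, frame time ~0.37 s
# CMOS system: ~2108 A-lines/frame, frame time ~0.12 s (matching the 0.12 s quoted above)
```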
This study describes the impact of enhanced axial resolution, speed and SNR on long scan depth SD-OCT, which images the entire ocular anterior segment in vivo during accommodation. We demonstrate the improved performance of the OCT system by enhancing the axial resolution with a 4096-pixel camera and the scan speed by using a CMOS camera. All of the OCT systems tested with the SNR enhancement approach yielded similar biometric results in the model eye and the human eye, indicating that they may be used for imaging static accommodation. For imaging real-time accommodation, the CMOS system may be recommended. In the future, the application of SD-OCT systems with long scan depth, high resolution and high scan speed will be improved by implementing automatic image registration, segmentation and three-dimensional reconstruction in clinical applications.
ACD: Anterior chamber depth
CCD: Charge-coupled device
CCT: Central corneal thickness
CLT: Central lens thickness
CMOS: Complementary metal-oxide-semiconductor
IOL: Intraocular lens
OPD: Optical path difference
PSF: Point spread function
SD-OCT: Spectral domain OCT
SLD: Superluminescent diode
SNR: Signal-to-noise ratio
UBM: Ultrasound biomicroscopy
Koretz JF, Cook CA, Kaufman PL. Accommodation and presbyopia in the human eye. Changes in the anterior segment and crystalline lens with focus. Invest Ophthalmol Vis Sci. 1997;38(3):569–78.
Glasser A, Kaufman PL. The mechanism of accommodation in primates. Ophthalmology. 1999;106(5):863–72.
von Helmholtz H. Uber die akkommodation des auges. Archiv Ophthalmol. 1855;1:1–74.
Dubbelman M, Van der Heijde GL, Weeber HA. Change in shape of the aging human crystalline lens with accommodation. Vision Res. 2005;45(1):117–32.
Hermans EA, Pouwels PJ, Dubbelman M, Kuijer JP, van der Heijde RG, Heethaar RM. Constant volume of the human lens and decrease in surface area of the capsular bag during accommodation: an MRI and Scheimpflug study. Invest Ophthalmol Vis Sci. 2009;50(1):281–9.
Rosales P, Dubbelman M, Marcos S, van der Heijde R. Crystalline lens radii of curvature from Purkinje and Scheimpflug imaging. J Vis. 2006;6(10):1057–67.
Richdale K, Bullimore MA, Zadnik K. Lens thickness with age and accommodation by optical coherence tomography. Ophthalmic Physiol Opt. 2008;28(5):441–7.
Liebmann JM, Ritch R. Ultrasound biomicroscopy of the anterior segment. J Am Optom Assoc. 1996;67(8):469–79.
Atchison DA, Markwell EL, Kasthurirangan S, Pope JM, Smith G, Swann PG. Age-related changes in optical and biometric characteristics of emmetropic eyes. J Vis. 2008;8(4):29.1–20.
Grulkowski I, Gora M, Szkulmowski M, Gorczynska I, Szlag D, Marcos S, et al. Anterior segment imaging with Spectral OCT system using a high-speed CMOS camera. Opt Express. 2009;17(6):4842–58.
Yan PS, Lin HT, Wang QL, Zhang ZP. Anterior segment variations with age and accommodation demonstrated by slit-lamp-adapted optical coherence tomography. Ophthalmology. 2010;117(12):2301–7.
Zhu D, Shen M, Jiang H, Li M, Wang MR, Wang Y, et al. Broadband superluminescent diode-based ultrahigh resolution optical coherence tomography for ophthalmic imaging. J Biomed Opt. 2011;16(12):126006.
Zhou C, Wang J, Jiao S. Dual channel dual focus optical coherence tomography for imaging accommodation of the eye. Opt Express. 2009;17(11):8947–55.
Jungwirth J, Baumann B, Pircher M, Götzinger E, Hitzenberger CK. Extended in vivo anterior eye-segment imaging with full-range complex spectral domain optical coherence tomography. J Biomed Opt. 2009;14(5):050501.
Furukawa H, Hiro-Oka H, Satoh N, Yoshimura R, Choi D, Nakanishi M, et al. Full-range imaging of eye accommodation by high-speed long-depth range optical frequency domain imaging. Biomed Opt Express. 2010;1(5):1491–501.
Ruggeri M, Uhlhorn SR, De Freitas C, Ho A, Manns F, Parel JM. Imaging and full-length biometry of the eye during accommodation using spectral domain OCT with an optical switch. Biomed Opt Express. 2012;3(7):1506–20.
Sarunic MV, Asrani S, Izatt JA. Imaging the ocular anterior segment with real-time, full-range Fourier-domain optical coherence tomography. Arch Ophthalmol. 2008;126(4):537–42.
Wojtkowski M, Leitgeb R, Kowalczyk A, Bajraszewski T, Fercher AF. In vivo human retinal imaging by Fourier domain optical coherence tomography. J Biomed Opt. 2002;7(3):457–63.
Kerbage C, Lim H, Sun W, Mujat M, de Boer JF. Large depth-high resolution full 3D imaging of the anterior segments of the eye using high speed optical frequency domain imaging. Opt Express. 2007;15(12):7117–25.
Du C, Zhu D, Shen M, Li M, Wang MR, Wang J. Novel optical coherence tomography for imaging the entire anterior segment of the eye. Invest Ophthalmol Vis Sci; ARVO E-Abstract. 2011;52:3023.
Grulkowski I, Liu JJ, Potsaid B, Jayaraman V, Lu CD, Jiang J, et al. Retinal, anterior segment and full eye imaging using ultrahigh speed swept source OCT with vertical-cavity surface emitting lasers. Biomed Opt Express. 2012;3(11):2733–51.
Gora M, Karnowski K, Szkulmowski M, Kaluzny BJ, Huber R, Kowalczyk A, et al. Ultra high-speed swept source OCT imaging of the anterior segment of human eye at 200 kHz with adjustable imaging range. Opt Express. 2009;17(17):14880–94.
Dai C, Zhou C, Fan S, Chen Z, Chai X, Ren Q, et al. Optical coherence tomography for whole eye segment imaging. Opt Express. 2012;20(6):6109–15.
Yuan Y, Chen F, Shen M, Lu F, Wang J. Repeated measurements of the anterior segment during accommodation using long scan depth optical coherence tomography. Eye Contact Lens. 2012;38(2):102–8.
Du C, Shen M, Li M, Zhu D, Wang MR, Wang J. Anterior segment biometry during accommodation imaged with ultralong scan depth optical coherence tomography. Ophthalmology. 2012;119(12):2479–85.
Shen M, Cui L, Li M, Zhu D, Wang MR, Wang J. Extended scan depth optical coherence tomography for evaluating ocular surface shape. J Biomed Opt. 2011;16(5):056007.
Wang H, Pan Y, Rollins AM. Extending the effective imaging range of Fourier-domain optical coherence tomography using a fiber optic switch. Opt Lett. 2008;33(22):2632–4.
Helmers H, Schellenberg M. CMOS vs. CCD sensors in speckle interferometry. Optics & Laser Technology. 2003;35(8):587–95.
Leitgeb R, Drexler W, Unterhuber A, Hermann B, Bajraszewski T, Le T, et al. Ultrahigh resolution Fourier domain optical coherence tomography. Opt Express. 2004;12(10):2156–65.
Moshirfar M, Hoggan RN, Muthappan V. Angle Kappa and its importance in refractive surgery. Oman J Ophthalmol. 2013;6(3):151–8.
Zhong J, Shao Y, Tao A, Jiang H, Liu C, Zhang H, et al. Axial biometry of the entire eye using ultra-long scan depth optical coherence tomography. Am J Ophthalmol. 2014;157(2):412–20.
Uhlhorn SR, Manns F, Tahi H, Rol PO, Parel JM. Corneal group refractive index measurement using low-coherence interferometry. Proc. SPIE. 1998;3246:14–21.
Atchison DA, Smith G. Chromatic dispersion of the ocular media of human eyes. J Opt Soc Am A Opt Image Sci Vis. 2005;22(1):29–37.
Uhlhorn SR, Borja D, Manns F, Parel JM. Refractive index measurement of the isolated crystalline lens using optical coherence tomography. Vision Res. 2008;48(27):2732–8.
Tao A, Wang J, Chen Q, Shen M, Lu F, Dubovy SR, et al. Topographic thickness of Bowman's layer determined by ultra-high resolution spectral domain-optical coherence tomography. Invest Ophthalmol Vis Sci. 2011;52(6):3901–7.
Potsaid B, Baumann B, Huang D, Barry S, Cable AE, Schuman JS, et al. Ultrahigh speed 1050nm swept source/Fourier domain OCT retinal and anterior segment imaging at 100,000 to 400,000 axial scans per second. Opt Express. 2010;18(19):20029–48.
Wojtkowski M, Kowalczyk A, Leitgeb R, Fercher AF. Full range complex spectral optical coherence tomography technique in eye imaging. Opt Lett. 2002;27(16):1415–7.
Wojtkowski M, Srinivasan V, Ko T, Fujimoto J, Kowalczyk A, Duker J. Ultrahigh-resolution, high-speed, Fourier domain optical coherence tomography and methods for dispersion compensation. Opt Express. 2004;12(11):2404–22.
Povazay B, Hofer B, Torti C, Hermann B, Tumlinson AR, Esmaeelpour M, et al. Impact of enhanced resolution, speed and penetration on three-dimensional retinal optical coherence tomography. Opt Express. 2009;17(5):4134–50.
Lockhart TE, Shi W. Effects of age on dynamic accommodation. Ergonomics. 2010;53(7):892–903.
Sun FC, Stark L, Nguyen A, Wong J, Lakshminarayanan V, Mueller E. Changes in accommodation with age: static and dynamic. Am J Optom Physiol Optic. 1988;65(6):492–8.
Marchini G, Pedrotti E, Modesti M, Visentin S, Tosi R. Anterior segment changes during accommodation in eyes with a monofocal intraocular lens: high-frequency ultrasound study. J Cataract Refract Surg. 2008;34(6):949–56.
Marchini G, Mora P, Pedrotti E, Manzotti F, Aldigeri R, Gandolfi SA. Functional assessment of two different accommodative intraocular lenses compared with a monofocal intraocular lens. Ophthalmology. 2007;114(11):2038–43.
This study was supported by research grants from the NIH 1R21EY021336, NIH Center Grant P30 EY014801 and Research to Prevent Blindness (RPB), Department of Defense (DOD- Grant #: W81XWH-09-1-0675).
Bascom Palmer Eye Institute, University of Miami, Miami, FL, USA
Yilei Shao
, Aizhu Tao
, Hong Jiang
, Carol L. Karp
, Yufeng Ye
& Jianhua Wang
School of Ophthalmology and Optometry, Wenzhou Medical University, Wenzhou, Zhejiang, China
, Meixiao Shen
, Dexi Zhu
& Fan Lu
Hangzhou First People's Hospital, Hangzhou, China
Yufeng Ye
Electrical and Computer Engineering, University of Miami, Miami, FL, USA
Jianhua Wang
Bascom Palmer Eye Institute, University of Miami, Miller School of Medicine, 1638 NW 10th Avenue, McKnight Building - Room 202A, Miami, FL, 33136, USA
YS, AZ, and JW made substantial contributions to conception and design. YS performed the acquisition, analysis and interpretation of data, and was a major contributor in drafting the manuscript. MS and DZ were involved in the analysis and interpretation of data. YS, AZ, HJ, YY, FL, CK and JW revised the manuscript critically for important intellectual content. All authors read and approved the final manuscript.
Correspondence to Jianhua Wang.
This study was approved by the institutional review board for human research at the University of Miami. Informed consent was obtained from each subject and all patients were treated in accordance with the tenets of the Declaration of Helsinki.
Informed consent was obtained from each subject.
Shao, Y., Tao, A., Jiang, H. et al. Long scan depth optical coherence tomography on imaging accommodation: impact of enhanced axial resolution, signal-to-noise ratio and speed. Eye and Vis 5, 16 (2018). https://doi.org/10.1186/s40662-018-0111-4
February 2002, Volume 2, Issue 1
Convergence of a boundary integral method for 3-D water waves
Thomas Y. Hou and Pingwen Zhang
2002, 2(1): 1-34. doi: 10.3934/dcdsb.2002.2.1
We prove convergence of a modified point vortex method for time-dependent water waves in a three-dimensional, inviscid, irrotational and incompressible fluid. Our stability analysis has two important ingredients. First we derive a leading order approximation of the singular velocity integral. This leading order approximation captures all the leading order contributions of the original velocity integral to linear stability. Moreover, the leading order approximation can be expressed in terms of the Riesz transform, and can be approximated with spectral accuracy. Using this leading order approximation, we construct a near field correction to stabilize the point vortex method approximation. With the near field correction, our modified point vortex method is linearly stable and preserves all the spectral properties of the continuous velocity integral to the leading order. Nonlinear stability and convergence with 3rd order accuracy are obtained using Strang's technique by establishing an error expansion in the consistency error.
Thomas Y. Hou, Pingwen Zhang. Convergence of a boundary integral method for 3-D water waves. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 1-34. doi: 10.3934/dcdsb.2002.2.1.
Simulation of stationary chemical patterns and waves in ionic reactions
Arno F. Münster
2002, 2(1): 35-46. doi: 10.3934/dcdsb.2002.2.35
Numerical simulations, based on a general model, of chemical patterns in ionic reaction-advection systems assuming a "self-consistent" electric field are presented. Chemical waves as well as stationary concentration patterns arise due to an interplay of an autocatalytic chemical reaction with diffusion, migration of ions in an applied electric field, and hydrodynamic flow. Concentration gradients inside the chemical pattern lead to electric diffusion potentials which in turn affect the patterns. Thus, the model equations take the general form of the Fokker-Planck equation. The principles of modeling an ionic reaction-diffusion-migration system are applied to a real chemical system, the nonlinear methylene blue-sulfide-oxygen reaction.
Arno F. Münster. Simulation of stationary chemical patterns and waves in ionic reactions. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 35-46. doi: 10.3934/dcdsb.2002.2.35.
Asymptotic behavior of solutions of time-delayed Burgers' equation
Weijiu Liu
In this paper, we consider Burgers' equation with a time delay. By using the Liapunov function method, we show that the delayed Burgers' equation is exponentially stable if the delay parameter is sufficiently small. We also give an explicit estimate of the delay parameter in terms of the viscosity and initial conditions, which indicates that the delay parameter tends to zero if the initial states tend to infinity or the viscosity tends to zero. Furthermore, we present numerical simulations for our theoretical results.
Weijiu Liu. Asymptotic behavior of solutions of time-delayed Burgers' equation. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 47-56. doi: 10.3934/dcdsb.2002.2.47.
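The exact delayed form studied in the paper is not reproduced in this abstract. As a purely illustrative sketch, one can place the delay in the advective coefficient, u_t + u(x, t - tau) u_x = nu * u_xx, and integrate it with an explicit finite-difference scheme; the placement of the delay, the parameter values and the periodic boundary conditions are all assumptions of the illustration.

```python
import numpy as np

# Explicit finite-difference sketch of a Burgers'-type equation with a time delay
# in the advection term (assumed form): u_t + u(x, t - tau) * u_x = nu * u_xx.
nu, tau = 0.1, 0.05                    # viscosity and delay (assumed values)
L, N, dt, T = 2 * np.pi, 256, 1e-4, 1.0
x = np.linspace(0.0, L, N, endpoint=False)
dx = x[1] - x[0]
lag = int(round(tau / dt))             # delay measured in time steps

u = np.sin(x)                          # initial condition
history = [u.copy()] * (lag + 1)       # constant history on [-tau, 0]

for _ in range(int(T / dt)):
    u_delayed = history[0]             # approximates u(., t - tau)
    ux = (np.roll(u, -1) - np.roll(u, 1)) / (2 * dx)
    uxx = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (-u_delayed * ux + nu * uxx)      # explicit Euler step
    history.append(u.copy())
    history.pop(0)

print("max |u| at T = 1:", float(np.abs(u).max()))  # decays, in line with stability for small delay
```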
Lyapunov-based transfer between elliptic Keplerian orbits
Dong Eui Chang, David F. Chichka and Jerrold E. Marsden
We present a study of the transfer of satellites between elliptic Keplerian orbits using Lyapunov stability theory specific to this problem. The construction of Lyapunov functions is based on the fact that a non-degenerate Keplerian orbit is uniquely described by its angular momentum and Laplace (Runge-Lenz) vectors. We suggest a Lyapunov function which gives a feedback controller such that the target elliptic orbit becomes a locally asymptotically stable periodic orbit in the closed-loop dynamics. We show how to perform a global transfer between two arbitrary elliptic orbits based on the local transfer result. Finally, a second Lyapunov function is presented that works only for circular target orbits.
Dong Eui Chang, David F. Chichka, Jerrold E. Marsden. Lyapunov-based transfer between elliptic Keplerian orbits. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 57-67. doi: 10.3934/dcdsb.2002.2.57.
Transmission boundary conditions in a model-kinetic decomposition
C. Bourdarias, M. Gisclon and A. Omrane
This paper deals with the fluid limit using the Perthame-Tadmor model with initial and boundary conditions of transmission type, with two positive parameters $\varepsilon_1$ and $\varepsilon_2$, for the kinetic dynamical problem. We show that the kinetic problem is well posed in $L^\infty \cap L^1(0,T;L^1(\mathbb{R}^n \times \mathbb{R}_v))$. We also prove a BV estimate which allows us to pass to the limit in each kinetic region or, under restrictive conditions, in a single region. This result can be applied to scalar conservation laws with domain decomposition.
C. Bourdarias, M. Gisclon, A. Omrane. Transmission boundary conditions in a model-kinetic decomposition. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 69-94. doi: 10.3934/dcdsb.2002.2.69.
Finite element analysis and approximations of phase-lock equations of superconductivity
Mei-Qin Zhan
2002, 2(1): 95-108. doi: 10.3934/dcdsb.2002.2.95
In [22], the author introduced the phase-lock equations and established the existence of both strong and weak solutions of the equations. We also investigated the relations between the phase-lock equations and the Ginzburg-Landau equations of superconductivity. In this paper, we present finite element analysis and computations of the phase-lock equations. We derive error estimates for both semi-discrete and fully discrete equations, including optimal $L^2$ and $H^1$ error estimates. In the fully discrete case, we use the backward Euler method to discretize the time variable.
Mei-Qin Zhan. Finite element analysis and approximations of phase-lock equations of superconductivity. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 95-108. doi: 10.3934/dcdsb.2002.2.95.
The nonlinear Schrödinger equation as a resonant normal form
Dario Bambusi, A. Carati and A. Ponno
Averaging theory is used to study the dynamics of dispersive equations, taking the nonlinear Klein-Gordon equation on the line as a model problem: for approximately monochromatic initial data of amplitude $\epsilon$, we show that the corresponding solution consists of two non-interacting wave packets, each one being described by a nonlinear Schrödinger equation. Such solutions are also proved to be stable over times of order $1/\epsilon^2$. We think that this approach puts into a new light the problem of obtaining modulation equations for general dispersive equations. The proof of our results requires a new use of normal forms as a tool for constructing approximate solutions.
Dario Bambusi, A. Carati, A. Ponno. The nonlinear Schrödinger equation as a resonant normal form. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 109-128. doi: 10.3934/dcdsb.2002.2.109.
Identification of modulated rotating waves in pattern-forming systems with O(2) symmetry
A. Palacios
A numerical algorithm for identifying modulated rotating waves in spatially extended systems with O(2) symmetry (the symmetry group of rotations and reflections of the plane) is presented. The algorithm can be applied to numerical simulations of partial differential equations (PDEs) and to experimental data obtained in a laboratory. The basic methodology is illustrated with various cellular patterns obtained from video images of a combustion experiment carried out on a circular burner. Rotating waves and modulated rotating waves are successfully identified in the experiment. The algorithm is then validated by comparing the analysis of experimental patterns with the analysis of computational patterns obtained from numerical simulations of a reaction-diffusion PDE model.
A. Palacios. Identification of modulated rotating waves in pattern-forming systems with O(2) symmetry. Discrete & Continuous Dynamical Systems - B, 2002, 2(1): 129-147. doi: 10.3934/dcdsb.2002.2.129.
February 2018, 38(2): 835-866. doi: 10.3934/dcds.2018036
Reversing and extended symmetries of shift spaces
Michael Baake 1, John A. G. Roberts 2 and Reem Yassawi 3
Faculty of Mathematics, Universität Bielefeld, Box 100131, 33501 Bielefeld, Germany
School of Mathematics and Statistics, UNSW, Sydney, NSW 2052, Australia
IRIF, Université Paris-Diderot — Paris 7, Case 7014, 75205 Paris Cedex 13, France
Received January 2017 Revised September 2017 Published February 2018
The reversing symmetry group is considered in the setting of symbolic dynamics. While this group is generally too big to be analysed in detail, there are interesting cases with some form of rigidity where one can determine all symmetries and reversing symmetries explicitly. They include Sturmian shifts as well as classic examples such as the Thue–Morse system with various generalisations or the Rudin–Shapiro system. We also look at generalisations of the reversing symmetry group to higher-dimensional shift spaces, then called the group of extended symmetries. We develop their basic theory for faithful $\mathbb{Z}^{d}$-actions, and determine the extended symmetry group of the chair tiling shift, which can be described as a model set, and of Ledrappier's shift, which is an example of algebraic origin.
Keywords: Symbolic dynamics, substitutions, shift spaces, tiling dynamics, automorphism and symmetry groups, reversibility, extended symmetries.
Mathematics Subject Classification: 37B10, 37B50, 52C23, 20B27, 20H05.
Citation: Michael Baake, John A. G. Roberts, Reem Yassawi. Reversing and extended symmetries of shift spaces. Discrete & Continuous Dynamical Systems, 2018, 38 (2) : 835-866. doi: 10.3934/dcds.2018036
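For one of the rigid examples mentioned in the abstract, the Thue–Morse shift, the letter exchange 0 ↔ 1 preserves the subshift, and spatial reflection (which reverses the shift direction, hence gives a reversing symmetry) also maps it to itself; both correspond to closure properties of its language. The snippet below is an empirical check of these closure properties, illustrative only and not taken from the paper.

```python
# Generate a long prefix of the Thue-Morse fixed point via the substitution
# 0 -> 01, 1 -> 10, and check that its set of length-k factors is closed both
# under the letter exchange 0 <-> 1 and under word reversal.
def thue_morse_prefix(iterations):
    word = "0"
    for _ in range(iterations):
        word = "".join("01" if c == "0" else "10" for c in word)
    return word

def factors(word, k):
    return {word[i:i + k] for i in range(len(word) - k + 1)}

w = thue_morse_prefix(14)                       # 2**14 = 16384 letters
F = factors(w, 8)                               # all length-8 factors occur in this prefix
swap = str.maketrans("01", "10")
print(F == {s.translate(swap) for s in F})      # closed under letter exchange: True
print(F == {s[::-1] for s in F})                # closed under reversal: True
```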
Figure 1. The chair inflation rule (upper left panel; rotated tiles are inflated to rotated patches), a legal patch with full $D_{4}$ symmetry (lower left) and a level-$3$ inflation patch generated from this legal seed (shaded; right panel). Note that this patch still has the full $D_{4}$ point symmetry (with respect to its centre), as will the infinite inflation tiling fixed point emerging from it
Figure 2. The chair tiling seed of Figure 1, now turned into a patch of its symbolic representation via the recoding of Eq. (15). The relation between the purely geometric point symmetries in the tiling picture and the corresponding combinations of point symmetries and LEMs can be seen from this seed
Figure 3. Illustration of the central configurational patch for Ledrappier's shift condition, which explains the relevance of the triangular lattice. Eq. (16) must be satisfied for the three vertices of all elementary $L$-triangles (shaded). The overall pattern of these triangles is preserved by all (extended) symmetries. The group $D^{}_{3}$ from Theorem 7 can now be viewed as the colour-preserving symmetry group of the 'distorted' hexagon as indicated around the origin
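The defining relation referred to in the caption is commonly written as the "three-dot" condition x(v) + x(v + e_1) + x(v + e_2) = 0 (mod 2) at every site v. Since this determines each row from the one below it, admissible configurations are easy to generate and verify; the sketch below uses periodic boundary conditions in the horizontal direction, which are an assumption of the illustration.

```python
import numpy as np

# Build a configuration of Ledrappier's shift row by row from the relation
# x(m, n+1) = x(m, n) + x(m+1, n) (mod 2), then verify the three-dot condition
# x(m, n) + x(m+1, n) + x(m, n+1) = 0 (mod 2) on every elementary triangle.
rng = np.random.default_rng(0)
width, height = 64, 32
config = np.zeros((height, width), dtype=np.uint8)
config[0] = rng.integers(0, 2, width)                          # arbitrary bottom row
for n in range(height - 1):
    config[n + 1] = (config[n] + np.roll(config[n], -1)) % 2   # periodic in the m direction

check = (config[:-1] + np.roll(config[:-1], -1, axis=1) + config[1:]) % 2
print(bool(np.all(check == 0)))   # True: the condition holds at every site
```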
Steven T. Piantadosi. Symbolic dynamics on free groups. Discrete & Continuous Dynamical Systems, 2008, 20 (3) : 725-738. doi: 10.3934/dcds.2008.20.725
Álvaro Bustos. Extended symmetry groups of multidimensional subshifts with hierarchical structure. Discrete & Continuous Dynamical Systems, 2020, 40 (10) : 5869-5895. doi: 10.3934/dcds.2020250
François Gay-Balmaz, Cesare Tronci, Cornelia Vizman. Geometric dynamics on the automorphism group of principal bundles: Geodesic flows, dual pairs and chromomorphism groups. Journal of Geometric Mechanics, 2013, 5 (1) : 39-84. doi: 10.3934/jgm.2013.5.39
Luis Barreira, Claudia Valls. Reversibility and equivariance in center manifolds of nonautonomous dynamics. Discrete & Continuous Dynamical Systems, 2007, 18 (4) : 677-699. doi: 10.3934/dcds.2007.18.677
Van Cyr, John Franks, Bryna Kra, Samuel Petite. Distortion and the automorphism group of a shift. Journal of Modern Dynamics, 2018, 13: 147-161. doi: 10.3934/jmd.2018015
James Kingsbery, Alex Levin, Anatoly Preygel, Cesar E. Silva. Dynamics of the $p$-adic shift and applications. Discrete & Continuous Dynamical Systems, 2011, 30 (1) : 209-218. doi: 10.3934/dcds.2011.30.209
Jim Wiseman. Symbolic dynamics from signed matrices. Discrete & Continuous Dynamical Systems, 2004, 11 (2&3) : 621-638. doi: 10.3934/dcds.2004.11.621
George Osipenko, Stephen Campbell. Applied symbolic dynamics: attractors and filtrations. Discrete & Continuous Dynamical Systems, 1999, 5 (1) : 43-60. doi: 10.3934/dcds.1999.5.43
Michael Hochman. A note on universality in multidimensional symbolic dynamics. Discrete & Continuous Dynamical Systems - S, 2009, 2 (2) : 301-314. doi: 10.3934/dcdss.2009.2.301
Jose S. Cánovas, Tönu Puu, Manuel Ruiz Marín. Detecting chaos in a duopoly model via symbolic dynamics. Discrete & Continuous Dynamical Systems - B, 2010, 13 (2) : 269-278. doi: 10.3934/dcdsb.2010.13.269
Nicola Soave, Susanna Terracini. Symbolic dynamics for the $N$-centre problem at negative energies. Discrete & Continuous Dynamical Systems, 2012, 32 (9) : 3245-3301. doi: 10.3934/dcds.2012.32.3245
Dieter Mayer, Fredrik Strömberg. Symbolic dynamics for the geodesic flow on Hecke surfaces. Journal of Modern Dynamics, 2008, 2 (4) : 581-627. doi: 10.3934/jmd.2008.2.581
Frédéric Naud. Birkhoff cones, symbolic dynamics and spectrum of transfer operators. Discrete & Continuous Dynamical Systems, 2004, 11 (2&3) : 581-598. doi: 10.3934/dcds.2004.11.581
Fryderyk Falniowski, Marcin Kulczycki, Dominik Kwietniak, Jian Li. Two results on entropy, chaos and independence in symbolic dynamics. Discrete & Continuous Dynamical Systems - B, 2015, 20 (10) : 3487-3505. doi: 10.3934/dcdsb.2015.20.3487
David Ralston. Heaviness in symbolic dynamics: Substitution and Sturmian systems. Discrete & Continuous Dynamical Systems - S, 2009, 2 (2) : 287-300. doi: 10.3934/dcdss.2009.2.287
Younghwan Son. Substitutions, tiling dynamical systems and minimal self-joinings. Discrete & Continuous Dynamical Systems, 2014, 34 (11) : 4855-4874. doi: 10.3934/dcds.2014.34.4855
Mike Boyle. The work of Mike Hochman on multidimensional symbolic dynamics and Borel dynamics. Journal of Modern Dynamics, 2019, 15: 427-435. doi: 10.3934/jmd.2019026
S. Eigen, V. S. Prasad. Tiling Abelian groups with a single tile. Discrete & Continuous Dynamical Systems, 2006, 16 (2) : 361-365. doi: 10.3934/dcds.2006.16.361
Van Cyr, Bryna Kra. The automorphism group of a minimal shift of stretched exponential growth. Journal of Modern Dynamics, 2016, 10: 483-495. doi: 10.3934/jmd.2016.10.483
Ben Niu, Weihua Jiang. Dynamics of a limit cycle oscillator with extended delay feedback. Discrete & Continuous Dynamical Systems - B, 2013, 18 (5) : 1439-1458. doi: 10.3934/dcdsb.2013.18.1439
This page of Research Supported by FAPESP assembles reference information on scholarships and research grants awarded by FAPESP related to the research theme Graphene.
The display board "FAPESP Support in Numbers" presents the number of ongoing or concluded scholarships and research grants awarded by the Foundation related to the theme Graphene. In the center of the page, you will find information on scholarships and research grants carried out in Brazil and abroad.
On the left side of the page, it is possible to browse the other Research Themes highlighted in science and in society.
Use the "Refine results" list of options to filter information on the research theme Graphene and to obtain results that are more specific to your search, according to the following parameters: Field of Knowledge, type of Research Grants, modality of Scholarships, type of Programs, Institution and status of Research Projects.
Furthermore, on this page, the Maps and Graphs help to identify the geographical distribution of FAPESP sponsorship in the State of São Paulo and to visualize the history of funded research on the theme Graphene by year.
Atomic force microscopy (AFM) is a microscopy technique widely used for surface analysis of films, particularly organic films such as polymers and very thin films such as graphene. Kelvin probe force microscopy is a variant of AFM that is very important for analysing areas with different potentials on the film surface, allowing the location of regions of higher or lower electron density...
Multiuser equipment granted in the Project Fapesp 2017/11986-5: interface controler for atomic force microscope, AP.EMU
We are studying several molecular systems with potential technological applications. We have started studies of systems analogous to graphene and its inorganic derivatives, where pairs of C atoms are replaced by B and N atoms and also by B and P. More recently, we have started studies on carbon structures based on the azulene molecule. These structures are: polymers, 2-D structur...
Theoretical study of bidimensional systems, AP.R
The present project concerns the synthesis of new nanoadsorbents based on graphene oxide, nanocellulose and chitosan, their characterization and their application in the removal of several pollutants from water, especially dyes and drugs. The project has two fronts: the first aims at the preparation of chitosan beads containing nanocomposites based on crystalline nanocellulose substituted ...
Synthesis of new Nanoadsorbents, their characterizations and applications in pollutant adsorption in water, AP.R
See more Ongoing research grants
19th Brazilian Workshop on Semiconductor Physics - BWSP 2019, AO.R
30th International Conference on diamond and carbon materials, AR.EXT
When describing quantum relativistic spin-1/2 particles moving in a plane, one can in principle use Dirac matrices realized by either $2\times 2$ matrices (2-component spinors) or $4\times 4$ matrices (4-component spinors) in the quantum equation of motion. However, when spin is a relevant degree of freedom for the particle dynamics, as when an external magnetic field is applied, it is impor...
Probing spin as a degree of freedom in two- and three-dimensional Dirac equations for two-dimensional motion with external magnetic fields, AV.EXT
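For background to the project summary above (standard textbook material, not part of the abstract): in the two-component description, planar motion of a massless spin-1/2 particle in a perpendicular magnetic field is governed by the minimally coupled 2 × 2 Dirac Hamiltonian, whose spectrum consists of relativistic Landau levels.

```latex
% 2x2 (two-component) massless Dirac Hamiltonian for planar motion in a
% perpendicular magnetic field B, and the resulting relativistic Landau levels:
H = v_F\,\boldsymbol{\sigma}\cdot\bigl(\mathbf{p} + e\mathbf{A}\bigr),
\qquad \nabla\times\mathbf{A} = B\,\hat{\mathbf{z}},
\qquad
E_n = \operatorname{sgn}(n)\, v_F\sqrt{2\hbar e B\,\lvert n\rvert},
\quad n\in\mathbb{Z}.
```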
Electrochemical sensors have been studied for the diagnosis of neglected diseases and cancer; however, there are still challenges for the commercial production of the devices. One of the possible improvements is in the choice of nanomaterials for the matrix on which the active layer is deposited. In this project, we will use graphene quantum dots (GQDs) or graphene nanoribbons (GRNs) in...
Electrochemical sensors with a matrix containing graphene quantun dots or nanoribbons for the detection of biomarkers, BP.PD
The main appeal of this project is to increase the safety of air navigation by developing graphene-derivative composites in metalloporphyrin paints, which will be used as sensing materials. Polymeric materials have unique characteristics that enable numerous applications. However, there are limitations to the use of these materials, such as the fact that most of them are insulating, mak...
Graphene derivatives obtained by electrochemical exfoliation for use in pressure sensitive nanocomposite, BP.IC
The production and processing of fruit are important activities for the Brazilian economy. However, the processing of fruit generates a large amount of waste which, when discarded in inappropriate places, causes a major environmental problem. Orange and banana are the most produced and consumed fruits in Brazilian territory, and the residues generated by the processing of these fru...
Development of disposable sensors based on 3D graphene oxide, metal nanoparticles and molecularly imprinted polymers for the determination of phenolic acids in fruticulture waste, BP.DR
See more Ongoing scholarships in Brazil
Liquid chromatography is one of the most prominent separation techniques today because of its applications in the solution of analytical problems, such as the development of new drugs, better knowledge about the metabolism of food and drugs in humans, the development of new agrochemicals with lower volatility and greater efficiency and effectiveness, and the harmful effects of emerging endocrine disrup...
Synthesis, characterization and evaluation of new materials for extraction micro-columns and their on-line coupling with micro and nano liquid chromatography, BP.IC
Development and nationalization of automated systems for nanostructured coating production, BP.PIPE
Graphene oxide is one of the derivatives of graphene and is a chemically modified compound that exhibits a highly oxidized form of graphite and is susceptible to chemical modifications such as the conjugation of dendrimers. Like graphene, it has good thermal stability, high electronic conductivity and high mechanical strength. In recent years, these nanomaterials have gained prominence ...
Internalization of graphene oxide by tumor cells in culture: evaluation of mechanisms and effects, BP.IC
See more Completed scholarships in Brazil
The manufacturing of nanocomposites has become a trend in materials science due to the possibility of producing new properties as a result of the combination of two or more phases. Recently, the combination of CaCu3Ti4O12 (CCTO) with other oxides for manufacturing advanced ceramic composites has been explored. There has been particular scientific and technological interest in the use of...
A comparative study of the structural and electrical properties of CaCu3Ti4O12/reduced graphene oxide ceramic nanocomposites obtained by different sintering methods, BE.EP.MS
Thermoelectric devices interconvert heat and electricity, leading to more sustainable, cleaner and safer energy technologies. Traditional inorganic thermoelectric materials are restricted by rigid geometries and the use of expensive components with low earth abundance. Highly porous nanocellulose aerogels are exceptional thermal insulators and, in combination with conductive polymers,...
Nanocellulose-conducting aerogels for thermoelectric application, BE.EP.PD
This project is focused on the development of memristive systems based on hybrid nanostructures grounded on reduced graphene oxides (rGOs) wrapped in polymers. From a technological point of view, graphene-based electronics is an attractive area as the traditional silicon-based semiconductor technology is approaching fundamental limits. rGOs offer a great flexibility to develop such syst...
Self-assembled Metal-Insulator-Metal structures for Memristor applications, BE.EP.DR
See more Ongoing scholarships abroad
Elucidating the mechanism of the oxygen reduction reaction (ORR), a fundamental process in fuel cells and metal-air batteries, is essential for a successful design of sustainable systems for energy conversion and storage. Despite more than 60 years of research, surface processes taking place during the ORR are non-well understood, being this an important bottleneck for massive commercia...
In situ characterization of the oxygen reduction reaction on carbon based materials by Shell-isolated nanoparticle-enhanced Raman Spectroscopy, BE.PQ
Biodiesel, a fuel derived from vegetable oils or animal fats, is an alternative to petroleum-derived fuels, and its use has environmental and economic advantages for Brazil. Biodiesel is produced by the transesterification of vegetable or animal oil with an alcohol, using an acid or a base as catalyst. A base is more effective because it gives a higher conversion rate, is faster and does ...
Voltammetric electronic tongues for determination of glycerol in biodiesel samples, BE.EP.IC
Currently, graphene, a two-dimensional nanomaterial, has been considered an ideal material for the production of polymeric nanocomposites with high electrical and thermal conductivity, excellent mechanical and barrier properties and relatively low production costs. However, the production of graphene for large-scale application and of polymer/graphene nanocomposites still pre...
Development and characterization of nanocomposites of thermoplastic matrix with reduced graphene oxide by ionizing radiation, BE.PQ
See more Completed scholarships abroad
46 / 46 Ongoing research grants
119 / 105 Completed research grants
67 / 67 Ongoing scholarships in Brazil
182 / 169 Completed scholarships in Brazil
8 / 8 Ongoing scholarships abroad
60 / 60 Completed scholarships abroad
482 / 455 All research grants and scholarships
Program theme-oriented
Bioenergy Research (BIOEN) (4)
Scholarships in Brazil (249)
Doctorate (31)
Doctorate (Direct) (18)
Post-doctorate (85)
Scientific Initiation (74)
Training Human Resources for Research (Technical Capacity-Building) (8)
Young Researchers (2)
Scholarships abroad (68)
Research Internship (54)
Scholarships abroad - Research Internship - Scientific Initiation (6)
Scholarships abroad - Research Internship - Master's degree (5)
Scholarships abroad - Research Internship - Doctorate (13)
Scholarships abroad - Research Internship - Doctorate (Direct) (4)
Scholarships abroad - Research Internship - Post-doctor (26)
Multi-user Equipment (EMU)
Materials Characterization (2)
Surface Analysis (1)
Probe Microscopy (AFM, STM) (1)
Spectroscopy (1)
Imaging (1)
Infrared, Raman (fNIRS) (1)
Sample Characterization and Analysis (1)
Optical Measurements (1)
High-Resolution Imaging (1)
Physical Processes (3)
Thin Film Deposition (1)
Evaporator (1)
Lithography (2)
Electron Beams (1)
Surface Modification - Etching (2)
Ion Beams (2)
Field of knowledge
Biological Sciences (12)
Biophysics (1)
Cellular Biophysics (1)
Applied Ecology (1)
Ecosystems Ecology (1)
Cytology and Cell Biology (4)
Neuropsychopharmacology (1)
Medical Engineering (1)
Chemical Engineering (10)
Chemical Process Industries (5)
Chemical Technology (3)
Electrical Materials (6)
Electrical, Magnetic and Electronic Circuits (1)
Electrical, Magnetic and Electronic Measurements, Instrumentation (7)
Power Systems (2)
Materials and Metallurgical Engineering (81)
Metallurgical Equipment and Facilities (2)
Nonmetallic Materials (72)
Mechanical Engineering Design (2)
Nuclear Engineering (1)
Applications of Radioisotopes (1)
Sanitary Engineering (7)
Water Supply and Wastewater Treatment (6)
Dental Materials (2)
Pharmaceutical Technology (1)
Interdisciplinary Subjects (15)
Physical Sciences and Mathematics (297)
Analytical Chemistry (44)
Inorganic Chemistry (20)
Physical-Chemistry (49)
Atomic and Molecular Physics (2)
Classical Areas of Phenomenology and Applications (2)
Condensed Matter Physics (141)
Elementary Particle Physics and Fields (18)
General Physics (4)
Probability and Statistics (1)
Regular Grants (2-year grants) (85)
Thematic Grants (5-year grants) (9)
Research Grants - Thematic Project (7)
CNPq - National Institutes of Science and Technology (INCTs) (1)
São Paulo Excellence Chair (SPEC) (1)
Young Investigators Grants (JP-FAPESP) (3)
Young Researchers (JP) (3)
PIPE Grants (4)
Multi-user Equipment Grants (7)
Organization of Scientific Meeting (10)
Grants for Organization of Scientific Meetings (10)
Visiting Researcher Grant (21)
Visiting Researcher Grant - Brazil (3)
Visiting Researcher Grant - International (18)
Participation in Scientific or Technological Meeting (22)
Participation in Scientific or Technological Meeting - Abroad (19)
Participation in Scientific or Technological Meeting - Brazil (3)
Support for Research Publications (Articles, Books) (3)
Institutional Research Infrastructure - Technical Reserve Grants (1)
CNPq (1)
CNPq - INCTs (1)
Coordination of Improvement of Higher Education Personnel (CAPES) (22)
Fundação para a Ciência e a Tecnologia (FCT) (3)
Imperial College, UK (1)
Newton Fund, with FAPESP as a partner institution in Brazil (1)
CONFAP ; Newton Fund, with FAPESP as a partner institution in Brazil (1)
Swinburne University of Technology (1)
Texas Tech University (1)
UK Academies (1)
Universidad de Salamanca (2)
C - MANUFACTURING INDUSTRIES (2)
27 - MANUFACTURE OF ELECTRICAL MACHINERY, APPARATUS AND MATERIALS (1)
27.9 - Manufacture of electrical equipment and apparatus not elsewhere specified (1)
27.90-2 - Manufacture of electrical equipment and apparatus not elsewhere specified (1)
28 - MANUFACTURE OF MACHINERY AND EQUIPMENT (2)
28.2 - Manufacture of general-purpose machinery and equipment (1)
28.29-1 - Manufacture of general-purpose machinery and equipment not elsewhere specified (1)
28.6 - Manufacture of machinery and equipment for specific industrial use (2)
28.69-1 - Manufacture of machinery and equipment for specific industrial use not elsewhere specified (2)
M - PROFESSIONAL, SCIENTIFIC AND TECHNICAL ACTIVITIES (3)
72 - SCIENTIFIC RESEARCH AND DEVELOPMENT (2)
72.1 - Experimental research and development in physical and natural sciences (2)
72.10-0 - Experimental research and development in physical and natural sciences (2)
74 - OTHER PROFESSIONAL, SCIENTIFIC AND TECHNICAL ACTIVITIES (1)
74.9 - Professional, scientific and technical activities not elsewhere specified (1)
74.90-1 - Professional, scientific and technical activities not elsewhere specified (1)
P - EDUCATION (1)
85 - EDUCATION (1)
85.5 - Educational support activities (1)
85.50-3 - Educational support activities (1)
Q - HUMAN HEALTH AND SOCIAL SERVICES (1)
86 - HUMAN HEALTH CARE ACTIVITIES (1)
86.1 - Hospital care activities (1)
86.10-1 - Hospital care activities (1)
86.4 - Diagnostic and therapeutic support service activities (1)
86.40-2 - Diagnostic and therapeutic support service activities (1)
86.6 - Health management support activities (1)
86.60-7 - Health management support activities (1)
Ongoing (121)
Program research infrastructure
Technical Training Program (8)
Support of Research Infrastructure (8)
Infrastructure for Research Equipment (7)
Multi-user Equipment (EMU) (7)
Institutional Research Infrastructure - Technical Reserve (1)
Country agreement
Agreements org. class
Higher Education and Research Institutions (5)
Research Funding Agencies (27)
Home research institution
Higher Education Inst. /Not Universities (19)
Centro Universitário Hermínio Ometto (UNIARARAS) (1)
Pró-Reitoria de Pós-Graduação e Pesquisa (1)
Faculdade de Engenharia Química de Lorena (FAENQUIL) (1)
Fundação Educacional Inaciana Padre Sabóia de Medeiros (FEI) (1)
Centro Universitário da FEI (UNIFEI) (1)
Campus de São Bernardo do Campo (1)
Instituto Federal de Educação, Ciência e Tecnologia de São Paulo (IFSP) (1)
Campus Matão (1)
Instituto Tecnológico de Aeronáutica (ITA) (15)
Divisão de Ciências Fundamentais (IEF) (13)
Divisão de Engenharia Aeronáutica (IEA) (1)
Divisão de Engenharia Mecânica (IEM) (1)
Research Institutions (25)
Centro Nacional de Pesquisa em Energia e Materiais (CNPEM) (3)
Centro de Pesquisa e Desenvolvimento (CPqD) (1)
Centro de Pesquisas Renato Archer (CENPRA) (1)
Embrapa Instrumentação Agropecuária (4)
Embrapa Meio-Ambiente (3)
Instituto Nacional de Pesquisas Espaciais (INPE) (9)
Instituto de Aeronáutica e Espaço (IAE) (2)
Instituto de Pesquisas Energéticas e Nucleares (IPEN) (2)
Universidade Brasil (1)
Campus São Paulo (1)
Universidade Estadual Paulista (UNESP) (69)
Campus de Araraquara (22)
Faculdade de Ciências Farmacêuticas (FCFAR) (1)
Instituto de Química (IQ) (21)
Campus de Bauru (4)
Faculdade de Ciências (FC) (4)
Campus de Botucatu (11)
Faculdade de Ciências Agronômicas (FCA) (11)
Campus de Guaratinguetá (4)
Faculdade de Engenharia (FEG) (4)
Campus de Ilha Solteira (2)
Faculdade de Engenharia (FEIS) (2)
Campus de Presidente Prudente (3)
Faculdade de Ciências e Tecnologia (FCT) (3)
Campus de Rio Claro (8)
Instituto de Geociências e Ciências Exatas (IGCE) (8)
Campus de Rosana (2)
Campus de São José do Rio Preto (8)
Instituto de Biociências, Letras e Ciências Exatas (IBILCE) (8)
Campus de São Paulo (5)
Instituto de Física Teórica (IFT) (5)
Universidade Estadual de Campinas (UNICAMP) (105)
Centro de Componentes Semicondutores (CCS) (3)
Faculdade de Ciências Aplicadas (FCA) (4)
Faculdade de Ciências Farmacêuticas (FCF) (1)
Faculdade de Ciências Médicas (FCM) (3)
Faculdade de Engenharia Elétrica e de Computação (FEEC) (5)
Faculdade de Tecnologia (FT) (6)
Instituto de Biologia (IB) (7)
Instituto de Física Gleb Wataghin (IFGW) (50)
Instituto de Matemática, Estatística e Computação Científica (IMECC) (1)
Universidade Federal de São Carlos (UFSCAR) (25)
Centro de Ciências Agrárias (CCA) (3)
Centro de Ciências Exatas e de Tecnologia (CCET) (19)
Centro de Ciências e Tecnologias para a Sustentabilidade (CCTS) (3)
Campus Baixada Santista (1)
Instituto de Saúde e Sociedade (ISS) (1)
Campus Diadema (1)
Instituto de Ciências Ambientais, Químicas e Farmacêuticas (ICAQF) (1)
Campus São José dos Campos (3)
Instituto de Ciência e Tecnologia (ICT) (3)
Universidade Federal do ABC (UFABC) (24)
Centro de Ciências Naturais e Humanas (CCNH) (14)
Centro de Engenharia, Modelagem e Ciências Sociais Aplicadas (CECS) (3)
Centro de Matemática, Computação e Cognição (CMCC) (7)
Universidade Metodista de Piracicaba (UNIMEP) (1)
Faculdade de Engenharia, Arquitetura e Urbanismo (1)
Universidade Presbiteriana Mackenzie (UPM) (74)
Centro de Pesquisas Avançadas em Grafeno, Nanomateriais e Nanotecnologia (MackGrafe) (69)
Escola de Engenharia (EE) (5)
Universidade de Sorocaba (UNISO) (1)
Pró-Reitoria Acadêmica (1)
Universidade de São Paulo (USP) (119)
Escola Politécnica (EP) (2)
Escola de Engenharia de Lorena (EEL) (3)
Escola de Engenharia de São Carlos (EESC) (1)
Faculdade de Filosofia, Ciências e Letras de Ribeirão Preto (FFCLRP) (5)
Faculdade de Medicina de Ribeirão Preto (FMRP) (2)
Faculdade de Odontologia de Bauru (FOB) (1)
Faculdade de Zootecnia e Engenharia de Alimentos (FZEA) (3)
Instituto de Ciências Biomédicas (ICB) (2)
Instituto de Estudos Avançados (IEA) (1)
Instituto de Física (IF) (20)
Instituto de Física de São Carlos (IFSC) (23)
Instituto de Matemática e Estatística (IME) (1)
Instituto de Química de São Carlos (IQSC) (26)
Universidade do Vale do Paraíba (UNIVAP) (6)
Instituto de Pesquisa e Desenvolvimento (IP&D) (6)
Program application-oriented
Small Business Research (7)
Innovative Research in Small Business (PIPE) (7)
|
CommonCrawl
|
advanced_tools:hopf_bundle
====== Hopf Bundle ======

===== Concrete =====

"The Hopf fibration is a kind of projection from the three-sphere to the two-sphere. The two-sphere is the one you're likely to be familiar with—a beach ball is a good example. The two-sphere is formed by all points which are a constant distance from a center point. We write the two-sphere as $S^2$ to indicate that it is 2-dimensional." (http://nilesjohnson.net/hopf.html)

* The best explanation, and especially how the Hopf bundle is related to magnetic monopoles, can be found in chapter 0 of the book Topology, Geometry and Gauge fields by G. Naber.
* See also http://math.ucr.edu/home/baez/week141.html
* For a nice visualization of the Hopf maps, see https://nilesjohnson.net/hopf.html

**Examples**

* $S^1 \to S^1$: the real numbers $\mathbb{R}$ are used to define the real Hopf map $S^1 \to S^1$ (a double cover). The fibre $S^0$ here is the group of unit real numbers $\{-1,1\}$, also known as $\mathbb{Z}/2$.
* $S^3 \to S^2$: the complex numbers $\mathbb{C}$ are used to define the Hopf map $S^3 \to S^2$. This Hopf map describes a magnetic monopole of unit strength.
* $S^7 \to S^4$: the quaternions $\mathbb{H}$ are used to define the Hopf map $S^7 \to S^4$. This Hopf map describes a single instanton.
* $S^{15} \to S^8$: the octonions $\mathbb{O}$ are used to define the Hopf map $S^{15} \to S^8$. Currently no physics application of this map is known. This map is different from the others, because the fibre $S^7$ (the unit octonions) is not really a group; the reason is that the octonions are not associative.

Sources for these examples: Section 9.4 "Principal Bundles" in Geometry, Topology and Physics by Nakahara, and http://math.ucr.edu/home/baez/week141.html

(The classification of the Hopf bundles as listed here is surprisingly similar to the classification of all simple Lie groups, see http://jakobschwichtenberg.com/classification-of-simple-lie-groups/. Each Hurwitz algebra corresponds to one family of simple groups. The octonions play a special role, because they correspond to the exceptional family, which has only a finite number of members.)

===== Why is it interesting? =====

Hopf bundles are the correct mathematical tools needed to describe the physics around a magnetic monopole or around instantons.

"Yep. My advisor said that if you want to convince aliens that there's intelligent life on earth, tell them about 1) π and 2) the Hopf map." (https://twitter.com/math3ma/status/837075372617302016)

"It's famous because the map from the total space to the base was the first example of a topologically nontrivial map from a sphere to a sphere of lower dimension. In the lingo of homotopy theory, we say it's the generator of the group $\pi_3(S^2)$." (http://math.ucr.edu/home/baez/week141.html)

"Hopf's construction of $P: S^3 \to S^2$ was motivated by his interest in what are called the 'higher homotopy groups' of the spheres (see Section 2.5). Although this is not our major concern at the moment, we point out that $P$ was the first example of a continuous map $S^m \to S^n$ with $m > n$ that is not 'nullhomotopic' (Section 2.3). From this it follows that the homotopy group $\pi_3(S^2)$ is not trivial, and this came as quite a surprise in the 1930s." (page 16 in Topology, Geometry and Gauge fields by Naber)

"When line bundles are regarded as models for the topological structure underlying the electromagnetic field the Hopf fibration is often called 'the magnetic monopole'." (https://ncatlab.org/nlab/show/Hopf+fibration)
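To make the $S^3 \to S^2$ example above fully concrete, the complex Hopf map can be written down explicitly; the formula below is the standard textbook form, added here for convenience rather than quoted from the sources cited above.

$$ S^3 = \{(z_1, z_2) \in \mathbb{C}^2 : |z_1|^2 + |z_2|^2 = 1\}, \qquad h(z_1, z_2) = \big(2\,\mathrm{Re}(z_1\bar{z}_2),\; 2\,\mathrm{Im}(z_1\bar{z}_2),\; |z_1|^2 - |z_2|^2\big). $$

A quick check that $h$ really lands on $S^2$: $|h(z_1,z_2)|^2 = 4|z_1|^2|z_2|^2 + (|z_1|^2 - |z_2|^2)^2 = (|z_1|^2 + |z_2|^2)^2 = 1$. Moreover, $h(e^{i\theta}z_1, e^{i\theta}z_2) = h(z_1, z_2)$ for every phase $e^{i\theta}$, so each fibre $h^{-1}(p)$ is a circle $S^1$; this $U(1)$ fibre structure is exactly what makes the bundle the model for a unit-strength magnetic monopole.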
|
CommonCrawl
|
Journal of Neurodevelopmental Disorders
Protective role of mirtazapine in adult female Mecp2+/− mice and patients with Rett syndrome
Javier Flores Gutiérrez1,
Claudio De Felice2,
Giulia Natali1,
Silvia Leoncini2,3,
Cinzia Signorini4,
Joussef Hayek3,5 &
Enrico Tongiorgi1 (ORCID: orcid.org/0000-0003-0485-0603)
Journal of Neurodevelopmental Disorders volume 12, Article number: 26 (2020) Cite this article
Rett syndrome (RTT), a rare X-linked neurodevelopmental disease mainly caused by MECP2-gene mutations, is a prototypic intellectual disability disorder. Reversibility of RTT-like phenotypes in an adult mouse model lacking the Mecp2 gene has given hope of treating the disease at any age. However, adult RTT patients are still in urgent need of new treatments. Given the relationship between RTT and monoamine deficiency, we investigated mirtazapine (MTZ), a noradrenergic and specific-serotonergic antidepressant, as a potential treatment.
Adult heterozygous-Mecp2 (HET) female mice (6 months old) were treated for 30 days with 10 mg/kg MTZ and assessed for general health, motor skills, motor learning, and anxiety. Motor cortex, somatosensory cortex, and amygdala were analyzed for parvalbumin expression. Eighty RTT adult female patients harboring a pathogenic MECP2 mutation were randomly assigned to treatment with MTZ for insomnia and mood disorders (mean age = 23.1 ± 7.5 years, range = 16–47 years; mean MTZ-treatment duration = 1.64 ± 1.0 years, range = 0.08–5.0 years). The Rett clinical severity scale (RCSS) and motor behavior assessment scale (MBAS) were retrospectively analyzed.
In HET mice, MTZ protected motor learning from deterioration and normalized parvalbumin levels in the primary motor cortex. Moreover, MTZ rescued the aberrant open-arm preference behavior observed in HET mice in the elevated plus-maze (EPM) and normalized parvalbumin expression in the barrel cortex. Since whisker clipping also abolished the EPM-related phenotype, we propose it is due to sensory hypersensitivity. In patients, MTZ slowed disease progression or induced significant improvements in 10/16 MBAS items of the M1 social behavior area, 4/7 items of the M2 oro-facial/respiratory area, and 8/14 items of the M3 motor/physical signs area.
This study provides the first evidence that long-term treatment of adult female heterozygous Mecp2tm1.1Bird mice and adult Rett patients with the antidepressant mirtazapine is well tolerated and that it protects from disease progression and improves motor, sensory, and behavioral symptoms.
Rett syndrome (RTT, OMIM #312750) [1, 2] is a postnatal, progressive, non-degenerative neurodevelopmental disorder [3] with an incidence of 1/10,000 female live births [4]. Typical RTT cases arise from de novo mutations in the X-linked gene MECP2 (methyl-CpG binding protein 2, HGNC:6990) [5], a context-dependent global organizer of chromatin architecture regulating transcription of numerous genes [6, 7]. Heterogeneity in MECP2 gene mutations [8] and cellular mosaicism derived from random X-chromosome inactivation (XCI [9];) contribute to highly variable symptomatology [10]. However, four symptoms are common to all typical RTT patients: loss of hand skills, loss of spoken language, gait abnormality, and stereotypic hand movements [11]. RTT clinical progression can be divided into four stages: a developmental stagnation stage, a rapid regression stage with loss of previously learned skills, a stationary stage, and a late motor deterioration stage in adulthood [12]. The latter usually leads to parkinsonism [13], but it often includes an improvement in social communication skills [14]. Because RTT first appears in childhood, most studies have focused on treatments at an early age, while studies on adult RTT are comparatively fewer. However, as relevant advances in medicine have allowed extending life expectancy in women with RTT [15], there is currently a strong need to find pharmacological treatments able to increase the quality of life of adult RTT people.
The demonstration that several RTT neurological defects can be rectified by re-expressing Mecp2 gene in adult mice [16], together with the lack of neuronal loss [17, 18], indicated that RTT is not an irrevocable disease. However, approaches aiming to restore the normal gene dosage are far from being achieved [19], and the only available therapies for RTT are symptomatic, especially to limit seizures [20]. Based on the observation that the monoamines serotonin, noradrenaline, and dopamine are reduced in RTT patients and mouse models [21,22,23], antidepressants have emerged as potential treatments for RTT. Indeed, desipramine, a noradrenaline-reuptake inhibitor, was successfully tested in mouse models commonly used to study RTT [24, 25] but did not show clinical efficacy in a phase-II clinical trial including 36 RTT girls and presented some relevant side effects [26]. Nonetheless, these studies provided relevant reasons to further investigate the noradrenergic pathway. Accordingly, we hypothesized to use mirtazapine (MTZ), a widely used noradrenergic and specific-serotonergic antidepressant (NaSSA) that has an excellent safety profile [27], as it lacks anticholinergic [28] and cardiorespiratory side effects [29]. We previously tested MTZ in male Mecp2tm1.1Bird null mice [20, 30], observing the rescue of several behavioral, physiological, and neuronal morphology phenotypes after only 2 weeks of treatment [31]. However, the male Mecp2y/− knockout mouse model misses several aspects of the disease which instead are present in female mice, such as heterozygosity of Mecp2 mutations due to XCI and a longer lifespan, which allows studies in adulthood [30]. Accordingly, here we investigated the effects of a long-term MTZ treatment in adult female heterozygous (HET) Mecp2tm1.1Bird mice and adult female RTT patients.
Animals were treated according to the institutional guidelines, in compliance with the European Community Council Directive 2010/63/UE for care and use of experimental animals. Authorization for animal experimentation was obtained from the Italian Ministry of Health (Nr. 124/2018-PR, with integration Nr. 2849TON17 for whisker clipping), in compliance with the Italian law D. Lgs.116/92 and the L. 96/2013, art. 13. All efforts were made to minimize animal suffering and to reduce the number of animals used. For animal production, Mecp2 HET females (Mecp2+/−, B6.129P2(C)-Mecp2tm1.1Bird/J, Stock No: 003890, The Jackson Laboratory, USA [30]) were bred with wild-type C57BL/6J male mice (The Jackson Laboratory, USA). We used female Mecp2+/− mice at 6 months of age, when they present several consistent RTT-like phenotypes [30] and represent, therefore, the most reliable mouse model to study RTT. After weaning, wild-type (WT) and HET mice were housed in ventilated cages under a 12-h light/dark cycle with food and water ad libitum. No environmental enrichment elements were added to the cages. All experiments were performed blind to the genotype and treatment of animals, and all control animals were WT age-matched littermates of HET mice. Finally, mice were assigned to groups according to the rules indicated by [32].
Mice genotyping
Biopsies from ear punches were incubated with 250 μL of DNA extraction buffer (TRIS 10 mM pH 7.5, EDTA 5 mM pH 8, SDS 0.2%, NaCl 10 mM, proteinase K 0.5 mg/mL) and left overnight at 55 °C. The day after, samples were centrifuged (12000 rpm, 20 min, RT), then 100 μL of the supernatant were mixed with isopropanol (1:1), and precipitated DNA was centrifuged again (12000 rpm, 30 min, RT). The supernatant was then discarded, and three washes with cold 70% ethanol with subsequent centrifugations (12000 rpm, 5 min, RT) were performed. Once the ethanol had evaporated, DNA pellets were homogeneously dissolved in milli-Q water. Genotypes were assessed by PCR on genomic DNA extracted from ear-clip biopsies. PCR reactions were performed using specific primers (forward common primer oIMR1436 5′-GGT AAA GAC CCA TGT GAC CC-3′, reverse mutant primer oIMR1437 5′-TCC ACC TAG CCT GCC TGT AC-3′) with 1 U GoTaq polymerase (Promega, Madison, USA), 1X green GoTaq buffer, 0.2 mM dNTPs each, 2.5 mM MgCl2, 0.5 μM of each primer, and 10 ng/μL of genomic DNA, cycled as follows: 95 °C for 3 min; 30 cycles of 95 °C for 20 s, 58 °C for 20 s, and 72 °C for 20 s; final extension at 72 °C for 2 min. This PCR generates a 400-bp product for the WT allele and an additional 416-bp product for heterozygous mice [31].
Mice treatment
Beginning at 5 months of age, HET females and WT littermates were i.p. injected with vehicle (VEH = 0.9% aqueous solution of NaCl and 5% ethanol) or MTZ (10 mg/kg, ab120068, Abcam, Cambridge, UK). The 10 mg/kg daily dosage in mice is equivalent to 50 mg/day in humans, which is the maximum dose used in patients [33]. To calculate it, we followed a dose-by-factor method modified from [34]. The following formula was used:
$$ \mathrm{Animal}\ \mathrm{dose}\ \left(\mathrm{mg}/\mathrm{kg}\right)=\frac{\mathrm{human}\ \mathrm{dose}\ \left(\mathrm{mg}/\mathrm{kg}\right)}{{\left[\mathrm{weight}\ \mathrm{mouse}\ \left(\mathrm{kg}\right)/\mathrm{weight}\ \mathrm{human}\left(\mathrm{kg}\right)\right]}^{0.33}} $$
We used the standard weights described by [34] (mouse = 0.02 kg; human = 60 kg), obtaining a dose of 11.7 mg/kg, which we rounded to 10 mg/kg.
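For readers who want to reproduce the conversion, the short sketch below recomputes the reported 11.7 mg/kg from the formula and the standard weights given above (an illustrative snippet, not code from the study).

```python
# Dose-by-factor conversion sketch (all values taken from the text above).
human_dose_mg_per_kg = 50 / 60                 # 50 mg/day for a 60-kg human
scaling_factor = (0.02 / 60) ** 0.33           # [mouse weight / human weight]^0.33
animal_dose = human_dose_mg_per_kg / scaling_factor

print(f"animal dose ~ {animal_dose:.1f} mg/kg")  # ~11.7 mg/kg, rounded to 10 mg/kg in the study
```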
As the half-life of MTZ is about 48 h, mice were treated on alternate days, for 30 days, at 10–11 a.m. For the randomization of groups, we first performed a general phenotypic scoring [16] of all WT and HET mice. Based on the results of this evaluation, we divided WT and HET mice evenly across our four experimental groups: WT-VEH, WT-MTZ, HET-VEH, and HET-MTZ.
Whisker clipping
This procedure was applied only to a group of untreated WT and HET mice. Before the whisker clipping, mice were injected with 100 μg/kg medetomidine (Domitor®, Vetoquinol, Magny-Vernois, France), an α-2 adrenergic agonist with hypnotic and sedative effects, to prevent potentially risky movements. We then placed the mice in an incubator at 37 °C and, when they were completely sedated, trimmed the whiskers to a few millimeters from the skin (leaving the whisker bulbs untouched). Immediately after that, unwhiskered mice were i.p. injected with 1 mg/kg atipamezole (Antisedan®, Zoetis, New Jersey, USA), a specific antagonist of α-2 adrenergic receptors. Mice were then kept for a few hours in an incubator at 30 °C. When they were completely awake, we put them back in their home cages and tested them in the elevated plus maze the following day.
General scheduling of treatment and behavioral testing is presented in Fig. 1a. In short, we first performed a general phenotypic scoring and two motor tests before and during the treatment, to detect individual and rapid effects of MTZ. At the end of the treatment, we performed further tests for a better assessment of treatment effects. In addition, we carried out a separate behavioral assessment of untreated mice whose whiskers had been clipped.
General health evaluation of 6-month-old WT and HET Mecp2tm1.1Bird female mice before, during, and after treatment with MTZ (10 mg/kg). a Two protocols are shown. In the first, the main experimental timeline is detailed. Before the treatment, we evaluated general health through a specific phenotypic scoring (PS), besides dowel (DT) and horizontal bar (HB) tests. During the treatment phase (30 days), we performed, on alternate days, either i.p. injections of 10 mg/kg MTZ/vehicle (VEH) or behavioural tests: PS, DT, HB, and the nest building test (NB). Within the post-treatment phase (the week following the last injection), we performed DT, rod walk (RD), open-field (OF), and elevated plus maze (EPM) tests. At the end of this phase, mice brains were dissected and prepared for histological analysis. The second protocol was used only on untreated 6-month-old WT and HET mice. These mice were first tested at the dark-light exploration test (D/L). The day after, they were sedated, and their whiskers were clipped. They were then left in their home cages overnight, and the day after they were tested at the EPM. b Main evaluation of the phenotypic scoring along the experimental timeline. c Delta values for the main evaluation of the PS, calculated as the difference between the last evaluation (post-treatment) and the first one (pre-treatment). d Grid representing time course of PS for each single mouse. Within each experimental group, each row represents a different mouse. Data on graphs are expressed as median ± interquartile range, n = 10–11 mice per group. According to results of Shapiro-Wilk test, we performed Kruskal-Wallis test followed by corrected Dunn's post hoc test. Multiple selected comparisons comprehended: WT-VEH vs WT-MTZ, WT-VEH vs HET-VEH, and HET-VEH vs HET-MTZ. ns: p > 0.05 (not shown); *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001
Phenotypic scoring
Mice were removed from their home cages and placed onto a laboratory bench for observation at the same time of the day. Phenotype severity was evaluated through a scoring system modified from [16]. The main evaluation comprises the following features: mobility, gait, hindlimb clasping, tremor, breathing, and state of fur and eyes. Scoring was as follows: 0 = absent or as observed in WT; 1 = moderate phenotype; 2 = severe phenotype. By summing these values, we obtained an overall score for each mouse, with 12 points being the maximum possible score. We also descriptively evaluated the following features: presence of reactive vocalizations, eye aperture, hierarchy inside the cage based on loss of hair in the back due to barbering behavior, presence of involuntary (clonic and tonic) movements, and grooming during observation. All treated mice (n = 10–11 per group) were analyzed through this scoring.
Dowel test
Following a modified protocol based on [35], we positioned the mouse with its four paws on the free edge of a striped 60-cm long and 10-mm thick wooden dowel and measured the latency to fall (endpoint = 120 s). The test was repeated before, during, and after the treatment. All treated mice (n = 10–11 per group) were analyzed through this test.
Horizontal bar test
Following a modified protocol based on [35], we let the mouse hang, with its forelimbs only, from a horizontal bar and measured the latency to fall (endpoint = 30 s). An individual score was assigned to each mouse: 1–5 s = 1 point; 6–10 s = 2 points; 11–20 s = 3 points; 21–30 s = 4 points; no fall = 5 points; travelling to the end of the bar = 6 points. The test was consecutively performed on bars with decreasing diameters (4 and 2 mm). Corresponding scores from both bars were summed, obtaining an individual score for each mouse. All treated mice (n = 10–11 per group) were analyzed through this test.
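As an illustration of how the latency bins above translate into a per-bar score, here is a minimal sketch; the function name bar_score and its arguments are hypothetical, introduced only for this example.

```python
def bar_score(latency_s: float, reached_end: bool = False, fell: bool = True) -> int:
    """Score one horizontal-bar trial using the bins described in the text (endpoint = 30 s)."""
    if reached_end:        # travelled to the end of the bar
        return 6
    if not fell:           # hung on for the full 30-s endpoint without falling
        return 5
    if latency_s <= 5:
        return 1
    if latency_s <= 10:
        return 2
    if latency_s <= 20:
        return 3
    return 4               # fell between 21 and 30 s

# Per-mouse score = sum of the 4-mm and 2-mm bar trials
print(bar_score(12.0) + bar_score(30.0, fell=False))  # 3 + 5 = 8
```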
Nest building test
Following a protocol that takes advantage of the natural inclination of mice to create a nest with different materials [36], we put mice in isolated cages 1 h before the beginning of the dark phase. We kept them in the individual cages for 16 h, in the presence of a 3-g Nestlet (Ancare, New York, USA) but without any other environmental enrichment. The day after, mice were returned to their home cages, nests were visually assessed, and remaining pieces of the Nestlet were weighed. All treated mice (n = 10–11 per group) were analyzed through this test.
Rod walk test
Following a modified protocol based on [35], we positioned the mouse at one edge of a 60-cm wood dowel (standing 50 cm above the floor), where a repulsing stimulus (strong light) was present. On the other side of the dowel, we positioned an attractive stimulus (dark cage with nesting material from the mouse's home cage). Transition time was measured in two consecutive trials, performed with striped dowels of 12 and 10 mm of diameter. All treated mice (n = 10–11 per group) were analyzed through this test.
Open field test
Each mouse was individually placed in the center of the open field (40 × 40 × 40 cm) and left to freely explore it for 20 min. Movements of mice were recorded from above to avoid interference from the experimenter, and videos were later analyzed with the ANY-maze software (Stoelting, New Jersey, USA), as previously described [31]. In short, we divided the open field area into central, middle, and border zones and measured automatically or manually the following parameters: entries and time spent in each zone, total traveled distance, mean speed, immobility episodes, grooming levels, vertical activity, and hopping behavior. All treated mice (n = 10–11 per group) were analyzed through this test.
Elevated plus maze
Each mouse was individually placed in the center area of a black plexiglass elevated plus maze for a 5-min test session [37]. Mice movements were recorded and later analyzed with the ANY-maze software (Stoelting, New Jersey, USA) to automatically measure entries and time spent in the open and in the closed arms. All treated mice (n = 10–11 per group) and untreated unwhiskered mice (n = 7 per group) were analyzed through this test.
Dark-light exploration
This test was performed as previously described [37]. The apparatus was made of two plexiglass compartments separated by a partition with a small opening. One compartment was transparent and illuminated, while the other was opaque and closed on top. Each mouse was individually placed into the center of the light compartment and allowed to freely explore both areas for 10 min. We video-tracked the movements of mice and then measured the number of transitions between light and dark sides, as well as time spent in each compartment, by using the ANY-maze software (Stoelting, New Jersey, USA). Only untreated unwhiskered WT and HET mice (n = 7 per group) were analyzed through this test.
Histological analyses
All mice were sacrificed at the end of behavioral testing (Fig. 1a), and dissected brains were fixed in 4% PFA solution (24 h, 4 °C). They were then washed, cryoprotected in sucrose solutions of increasing concentration (20% and 30%), and stored at 4 °C. We produced 20-micron transversal sections with a cryostat (Leica, Wetzlar, Germany) from approximately Bregma +1.54 to Bregma −1.58 [38]. This segment includes our three regions of interest: the primary motor cortex (M1), the primary somatosensory-barrel cortex (S1), and the baso-lateral amygdala (BLA).
Analysis of cortical thickness
PFA-fixed slices from treated mouse brains were stained in cresyl violet solution (0.2% cresyl violet (Sigma), 0.5% glacial acetic acid, 0.01 M sodium acetate) for 20 min at 37 °C. Sections were then sequentially dehydrated with rapid washes in: 70% ethanol, 95% ethanol, 100% ethanol, methanol, 1:1 methanol-xylene. Finally, slides were coverslipped using Eukitt (Sigma, Saint Louis, USA). Pictures of Nissl-stained brain sections were acquired with a Nikon AMX1200 digital camera on a Nikon E800 Microscope (× 4 magnification). Bregma position was assigned to each slice following [38], and then the FIJI software was used to measure cortical thickness [31]. First, we produced a densitometric plot profile from a line spanning the cortex perpendicularly from the pial surface to the white matter underlying cortical layer VI. We then identified transitions among layers as variations in the densitometric plot profile, which reflect cell density changes between cortical layers, and measured their lengths in the plot (see Supplementary Fig. S3). This approach allowed us to measure the total cortical thickness in S1 and M1, and the relative thickness of cortical layers I, II/III–IV, and V–VI in the S1. Eight mice per group and 10–15 slices per mouse were analyzed.
Free-floating slices were permeabilized (1% Triton X-100) for 1 h at RT, then incubated in blocking solution (0.1% Triton X-100, 2% BSA) for 1 h at RT, and finally incubated in fresh blocking solution containing 1:5000 anti-parvalbumin antibodies (PV235, Swant) overnight at 4 °C. The day after, slices were washed and then incubated in PBS containing 1:250 Alexa Fluor 568 donkey anti-rabbit antibodies (A-10042, Life Technologies). Slices were finally incubated in 1:1000 Hoechst solution (33342, Sigma).
Analysis of parvalbumin-positive (PV+) cells
Fluorescence images were acquired with a Nikon Eclipse Ti-E epifluorescence microscope equipped with a Nikon DS-Qi2 camera. We first used a × 4 magnification to identify the Bregma position of each slice and then took a large picture with a × 10 magnification (2 × 2 fields, total area = 1,310,720 μm2) of the correspondent regions of interest (Supplementary Fig. S2). Acquisition and analyses were performed by using the NIS-Elements software (v4.60. Nikon). We generated digital boxes (width = 230.34 μm) spanning from the pial surface to the corpus callosum that were superimposed at the level of the barrel and primary motor cortices on each brain slice. PV+ and Hoechst-positive cells were counted automatically after having applied an automatic intensity threshold specific for each picture. From 5 to 10 slices for each mouse, and 4–5 mice per group were analyzed. We used the FIJI software [39] to measure the intensity of PV+ cells. In short, we manually created specific ROIs for PV+ cells somata and then automatically measured their mean pixel intensity. Three additional ROIs were used to measure the background for each picture, which was later subtracted from the mean intensity in the correspondent PV+ cells. Since PV+ positive cells showed a high signal-to-noise ratio after background subtraction, no further post-processing image manipulation was carried out. Between 177 and 283 neurons for each experimental group (for cortical ROIs) and between 38 and 88 neurons for each experimental group (for BLA) were evaluated through this protocol. For histological analyses, each experimental group was composed of 4–5 mice with average levels in behavioral phenotypes.
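The background-subtraction step can be summarized by the short sketch below, a NumPy illustration on synthetic pixel values; the actual measurements were made in FIJI on manually drawn ROIs.

```python
import numpy as np

def corrected_mean_intensity(soma_pixels, background_rois):
    """Mean soma intensity minus the mean of the background ROI means."""
    background = np.mean([np.mean(roi) for roi in background_rois])
    return float(np.mean(soma_pixels) - background)

# Synthetic example: one PV+ soma ROI and the three background ROIs of one picture
soma = np.array([180, 175, 190, 185], dtype=float)
background = [np.array([20.0, 22.0]), np.array([18.0, 21.0]), np.array([19.0, 20.0])]
print(corrected_mean_intensity(soma, background))  # 182.5 - 20.0 = 162.5
```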
Study design and patients
This is a retrospective study on 80 adult female patients diagnosed with Rett syndrome. All clinical examinations were performed at the Child Neuropsychiatry Unit, University Hospital Le Scotte, Siena, Italy, between the years 2012 and 2019. Clinical phenotype was compatible with typical RTT for 76 patients (95% of total sample) while 4 presented with preserved speech variant (PSV) [40]. Forty RTT adult female patients harboring a pathogenic MECP2 mutation (37 typical and 3 PSV patients) were treated with MTZ for insomnia and mood disorders (mean age = 23.9 ± 7.9 years, range = 16–47 years; mean MTZ-treatment duration = 1.64 ± 1.0 years, range = 0.08–5.0 years). For comparative purposes, a genetically heterogeneous, age-matched cohort of 40 RTT patients (39 typical and 1 PSV patient) not receiving MTZ was considered as the control group. Informed consent to participate in the study was obtained by the parents/caregivers. Ethical approval was waived by the local Ethics Committee of Azienda Sanitaria of Siena in view of the retrospective nature of the study and all the procedures being performed were part of the routine care. The retrospective investigation was conducted according to the Ethics Guidelines of the institute and the recommendations of the declaration of Helsinki and the Italian DL-No.675/31-12-1996.
Demographic data, type of MECP2 mutation, MTZ dosage, duration of treatment, and clinical severity at baseline as evaluated by Rett syndrome clinical severity scale were collected (Tables 1 and 2). The RCSS, a validated RTT-specific scale designed to assess the severity of key symptoms, was completed by the same clinician at each visit. It consists of 13 items providing a rating of core symptoms of RTT on a Likert scale of either 0 to 4 or 0 to 5 with a maximum total score of 58. Efficacy was gauged through possible variations in illness severity as a function of MTZ treatment based on variations in motor behavioral assessment scale (MBAS) score. The MBAS includes a subset of 37 items, each one ranging 0 to 4, with a total sum ranging 0 to 68. Both scores are known to be proportional to illness severity. Possible changes in the MBAS item subscores were considered. Both RCSS and MBAS were rated by experienced clinicians (JH, CDF). To reduce inter-observer variability, the average score between the two physicians was used. MTZ safety and tolerability were monitored by clinicians during the scheduled visits at the center and through periodic phone contacts with parents. Drug tolerance and the occurrence of possible adverse events were also recorded.
Table 1 Demographics and relevant features of untreated RTT vs patients treated with mirtazapine (MTZ)
Table 2 Dosages and duration of Mirtazapine (MTZ) treatment on patients
Experimental design and statistical analysis
All mouse behavioral experiments were performed blind to the experimenter, and all experimental groups were tested in the same session in a randomized order. As reported in the immunohistological techniques section, we analyzed from 5 to 10 slices for each mouse, in order to have representative single values. For the retrospective analysis of patients treated with MTZ, the sample size was equal to the number of available MTZ-treated patients. For all experiments, we performed a post hoc power analysis using the G*Power 3 software [41]. We verified that all significant results on which we based our conclusions had an associated power (1-β) higher than 0.8 and can therefore be considered statistically reliable. Data analysis and data graphics were performed with the GraphPad Prism 7.0 software (GraphPad, La Jolla, California, USA). Parametric data are expressed as mean ± standard deviation (SD) and non-parametric data as median ± interquartile range. To test differences between two groups, Student's t test, Mann-Whitney test, or Wilcoxon signed-rank test were used as appropriate. One-way ANOVA (with Dunnett's post hoc test) or the Kruskal-Wallis test (with the corrected Dunn's post hoc multiple comparisons test) was used for comparing multiple groups. The combined effect of two independent variables was tested using two-way ANOVA (with Dunnett's post hoc test). The chi-squared test was used for categorical variables in the study of patients. All outliers were detected using Grubbs' test.
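The normality-gated choice between the parametric and non-parametric two-group tests described above can be sketched as follows (synthetic data; the study itself used GraphPad Prism 7.0, so this only illustrates the decision rule).

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
wt_veh = rng.normal(100, 10, 11)   # hypothetical WT-VEH measurements
het_veh = rng.normal(85, 10, 10)   # hypothetical HET-VEH measurements

def is_normal(sample, alpha=0.05):
    _, p = stats.shapiro(sample)   # Shapiro-Wilk normality test
    return p > alpha

if is_normal(wt_veh) and is_normal(het_veh):
    _, p_value = stats.ttest_ind(wt_veh, het_veh)        # Student's t test
else:
    _, p_value = stats.mannwhitneyu(wt_veh, het_veh)     # Mann-Whitney test
print(f"p = {p_value:.4f}")
```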
General health in HET mice is not affected by MTZ
To determine whether a 30-day treatment (through i.p. injections on alternate days) with MTZ at 10 mg/kg (a dose equivalent to the maximum dosage used in humans, 50 mg/day, see details in the "Methods" section) was able to improve general health in adult (6-month-old) HET female mice, we performed a phenotypic analysis using a modified protocol from [16] (see "Methods") at the end of each week of treatment, including a pre-treatment time point (Fig. 1a). The general health of vehicle-treated HET (HET-VEH) mice was significantly reduced compared to vehicle-treated wild-type (WT-VEH) mice already before the treatment (p < 0.0001, corrected Dunn's post hoc test) and remained unchanged throughout the analyzed period (Fig. 1b). No significant improvement was observed in HET mice treated with MTZ (HET-MTZ) compared to HET-VEH mice at the end of the treatment (p > 0.9999, corrected Dunn's post hoc test; Fig. 1c). Regarding the possible side effects of MTZ, we verified the complete safety of the treatment, as no phenotypic alteration was observed in WT mice treated with MTZ (WT-MTZ) compared to WT-VEH control mice at the end of the treatment (p > 0.9999, corrected Dunn's post hoc test; Fig. 1b).
MTZ prevents the progression of motor deficits in adult HET mice
Progressive motor deficits are a key feature of RTT which has been largely described in patients and in male and female murine models used to study this disease [37, 42]. We evaluated whether MTZ was able to rescue motor phenotypes in adult HET mice by testing motor performance at three different time points: before, during, and after the treatment (Fig. 1a). In the horizontal bar test (HB) both WT-VEH and WT-MTZ mice showed, on average, a better performance at the end compared to the beginning of the testing period, as the mathematical differences (deltas) of scoring between both time points were positive (WT-VEH: delta = 0.80, WT-MTZ: delta = 0.63; Fig. 2a). These results likely reflect the acquisition of learned motor skills upon the repetition of the tests. In the Dowel test using the wooden dowels with a diameter of 10 mm, delta values of latency to fall were equal to zero for every single WT mouse, as all of them reached the endpoint (120 s standing on the dowel) before and after the treatment (Fig. 2b). Instead, HET-VEH mice showed negative delta values in both tests (HB: delta = − 2.22; DT: delta = − 33.4), indicating worsening of the general motor performance (Fig. 2a, b; disaggregated scores in Supplementary Fig. S1A-B, E-F). MTZ treatment significantly prevented HET mice from worsening, as delta values were close to zero in both tests, and even improved performance in half of HET-MTZ mice (Fig. 2a, b). Horizontal bars and dowel tests were also performed in the middle of the treatment to detect possible rapid effects of MTZ, but no significant changes on HET-MTZ mice compared to HET-VEH mice were observed (HB: p > 0.9999; DT: p = 0.6284; corrected Dunn's post hoc test), suggesting that long-term treatment is necessary to see effects of MTZ (compare Fig. 2a, b with Supplementary Fig. S1C-D). Of note, the motor deficits and the recovery of the motor performance could only be seen using the 10-mm diameter dowels and not with the 12-mm dowel (Supplementary Fig. S1), likely because the test on the larger wooden rod was less challenging for the HET mice.
Assessment of motor phenotypes and anxiety-related behaviours of 6-month-old Mecp2tm1.1Bird female mice after treatment with MTZ (10 mg/kg). a Motor learning at the horizontal bar test, measured as the change (delta value) between performance at the end and at the beginning of the treatment. b Motor learning at the dowel test, measured as the change (delta values) between performance at the end and at the beginning of the treatment. c Total travelled distance in the open field. d Scoring of the nests built by mice after 24 h in isolation. e Transition time at the rod walk test. All data are expressed as median ± interquartile range, n = 10–11 mice per group. f Time spent in open arms in the elevated plus maze in wild-type (WT) and heterozygous (HET) mice, treated with either mirtazapine (MTZ) or vehicle (VEH). A group of untreated HET mice whose whiskers were clipped (HET-UNTw-) is also included in the graph. g Time spent in zones in the dark-light exploration test by untreated (UNT) WT and HET mice. h Number of transitions between the light and the dark zones. i Time spent in each zone in the open field by WT and HET mice after treatment with VEH. All data are expressed as median ± interquartile range, n = 7–11 mice per group. According to results of Shapiro-Wilk test, we performed either one-way ANOVA followed by Sidak's multiple comparison test or Kruskal-Wallis test, followed by corrected Dunn's post hoc test. Two-way ANOVA multiple selected comparisons comprehended: WT-VEH vs WT-MTZ, WT-VEH vs HET-VEH, and HET-VEH vs HET-MTZ. In order to determine an eventual genotype effect, 2-way ANOVA was performed in the open-field test. *p ≤ 0.05, **p ≤ 0.01, ***p ≤ 0.001
In addition to the HB and the DT, three further motor tests were performed only at the end of the treatment. In the open field, used to assess both the general locomotor activity and anxiety levels, HET mice showed a significantly shorter distance traveled during the observation time (20 min) in comparison to WT mice (F = 16.33, p = 0.0003, two-way ANOVA), and MTZ did not show any significant effect in either genotype (F = 1.27, p = 0.2686; Fig. 2c). In the nest building test, which evaluates the fine motricity of the forepaws, we did not find any significant phenotype, as no differences were observed between groups (Fig. 2d). In the rod walk test, a clear deficit of general motor coordination was detected (WT-VEH vs HET-VEH: p = 0.0068, corrected Dunn's post hoc test), but no effect of MTZ treatment was observed (HET-VEH vs HET-MTZ: p > 0.9999; Fig. 2e). Regarding potential side effects of MTZ in mice, we observed no deleterious effects on motor performance, since the scores of WT-MTZ mice were not significantly different from those of WT-VEH mice for any of the motor tests used. In conclusion, we observed a protective effect of MTZ on motor learning in HET-MTZ mice, with no deleterious effects on motor skills in any used test.
MTZ rescues the EPM-related phenotype in adult HET mice
Behavioral abnormalities in RTT patients usually include anxiety episodes elicited by distressful external events [43]. In mice, anxiety-related behaviors are typically assessed through the elevated plus maze (EPM), and a characteristic phenotype has been described in both male [31, 42] and young female Mecp2tm1.1Bird mice [37]. Specifically, these mice tend to explore longer the open arms compared to their wild-type littermates. We tested 6-month-old female Mecp2tm1.1Bird mice in the EPM after a 30-day treatment with MTZ (Figs. 1a, 2f). Our results confirmed the presence of the previously described phenotype (WT-VEH vs HET-VEH: p = 0.0095, corrected Dunn's post hoc test, one-way ANOVA) and demonstrated a complete recovery by MTZ, as HET-MTZ mice explored the maze following the same pattern of WT-VEH mice (Fig. 2f). These results are perfectly in agreement with those we previously obtained in male mice treated with MTZ [31].
EPM-related phenotype is likely due to enhanced sensitivity of whiskers
According to the standard interpretation, an extended time of exploration of the open arms in the EPM reflects a state of lower anxiety or a high risk-taking behavior. However, this contrasts with the typical symptomatology found in RTT patients who instead present increased levels of anxiety [44]. We then hypothesized that the aberrant behavior of HET mice at the EPM could represent an avoidance of the closed arms due to whisker hypersensitivity, rather than a preference for the open arms. According to this supposition, the narrow, closed arms of the EPM would represent a disturbing stimulus for HET mice. To verify this hypothesis, untreated 6-month-old HET mice were submitted to clipping of all the whiskers (HET-UNTw-) and on the day after were tested at the EPM. We observed that the open-arm preference phenotype described before was completely abolished in HET-UNTw- mice (Fig. 2f). To corroborate these results, we investigated if any anxiety-related phenotype was present in adult female Mecp2tm1.1Bird mice. Firstly, we tested untreated WT and HET mice at the dark/light test (D/L), a classical anxiety test that is performed in a large platform that mice can explore almost without stimulation of the whiskers. This test showed an absence of any anxiety phenotype in untreated HET mice (time in the light zone, WT-UNT vs HET-UNT: t = 1.090, p = 0.2970; unpaired Student's t test; Fig. 2g). In agreement with results obtained in motor testing, the number of transitions between the light and the dark areas was significantly lower in the HET-UNT mice (t = 2.585, p = 0.0239; Fig. 2h). Furthermore, we analyzed the anxiety-related parameters of the open-field test, which confirmed the absence of any anxiety phenotype also in this test (U = 11, p = 0.0663, Mann-Whitney test; Fig. 2i). Altogether, these findings strongly support the view that whisker hypersensitivity underlies the preference for the open arms at the EPM observed in both HET female (present study and [37]) and null male Mecp2tm1.1Bird mice [31, 42].
Alterations in parvalbumin-positive cells are rescued by MTZ
A previous study conducted in 5xFAD transgenic mice, an animal model used to study Alzheimer's disease, evidenced a preference for the open arms in the EPM similar to the behavior of our adult HET female mice [45]. The authors further demonstrated that 5xFAD mice presented a decreased expression of parvalbumin (PV) in barrel cortex interneurons. Parvalbumin-positive (PV+) cells represent a subpopulation of fast-firing GABAergic cells which accounts for 40% of all interneurons in the rodents' neocortex [46]. Authors suggested that the reduced levels of PV, which acts as a buffer for free Ca2+ rise following action potentials [47], could reflect attenuation of the inhibitory activity of PV+ cells, thus possibly causing whisker hypersensitivity [45]. Notably, altered levels of PV expression in layers II–III of both primary somatosensory and primary motor cortices of Mecp2tm1.1Jae male mice have been recently described, suggesting an alteration of the excitation/inhibition balance [48, 49]. We thus hypothesized that the phenotype observed in Mecp2tm1.1Bird adult female mice could be linked to alterations in PV+ cells. To test this hypothesis, the cell density and the staining intensity of PV+ cells were measured in three brain areas: primary somatosensory-barrel cortex (S1BF, all layers), primary motor cortex (M1, all layers), and basolateral amygdala (BLA) (Fig. 3; Supplementary Fig. S2). Since in Rett syndrome protein expression is not only modified by cell-autonomous mechanisms but may also be influenced by the non-cell-autonomous environment [50], we decided not to take into account the genotype of each individual cell but rather evaluate the general PV expression in specific brain areas of heterozygous mice. Of note, we have verified the heterozygosis level, and we found that the proportion of Mecp2-KO cells/total cells is on average around 50% in both HET groups (treated and untreated) and therefore the two groups are fully comparable. Whereas neither the general cell density of Hoechst-positive cells nor the thickness of the S1 and M1 cortex was significantly different between groups (Supplementary Fig. S3), the density of PV+ cells in HET mice was significantly increased compared to WT mice in the three analyzed brain areas (M1: F = 9.20, p = 0.0096; S1BF 11.76, p = 0.0050; BLA: F = 6.79, p = 0.0244; two-way ANOVA). The increased PV+ cell density was not modified by MTZ treatment in any analyzed areas (Fig. 3d–f). Furthermore, the PV immunoreactivity intensity in these neurons was significantly increased in the primary motor cortex (WT-VEH vs HET-VEH: U = 14,189, p = 0.0049, Mann-Whitney test; Fig. 3g), while it resulted significantly decreased in both barrel cortex (U = 24,070, p = 0.0403; Fig. 3h) and BLA (U = 625.5, p = 0.0138; Fig. 3i). Importantly, by comparing HET-VEH and HET-MTZ mice, we observed a complete recovery to normal PV immunoreactivity levels in the motor cortex upon MTZ treatment (U = 11,304, p < 0.0001; Fig. 3g). Moreover, the low PV immunoreactivity observed in the barrel cortex of HET-VEH mice was significantly increased in HET-MTZ mice, even beyond WT-VEH levels (U = 30,430, p < 0.0102; Fig. 3h). In contrast, in the BLA, the differences in PV staining in HET-MTZ mice were not statistically significant (U = 1909, p = 0.2694; Fig. 3i). Taken together, these cytological results are evidence of a structural effect of MTZ on neuronal networks and are in perfect agreement with the observed motor and somatosensory behavioral results.
Analysis of the effects of treatment with MTZ (10 mg/kg) on parvalbumin-positive interneurons (PV+ INs) in 6-month-old Mecp2tm1.1Bird female mice. a–c Examples of PV and Hoechst staining of brain sections in the three regions of interest (ROIs) analyzed: the primary motor cortex, the barrel cortex, and the basolateral amygdala (BLA). (scale bar: 50 μm). d–f Density of PV+ INs in the three ROIs. g–i Mean intensity of PV+ INs' somata within the three ROIs. All data are expressed as median ± interquartile range, n = 4–5 mice per group; 177–283 neurons per mouse brain (for cortical ROIs) or 38–88 neurons per mouse brain (for BLA). According to results of Shapiro-Wilk test, either one-way ANOVA, followed by Tukey's post hoc test, or Kruskal-Wallis test, followed by corrected Dunn's post hoc test was performed. Multiple selected comparisons comprehended: WT-VEH vs WT-MTZ, WT-VEH vs HET-VEH, and HET-VEH vs HET-MTZ. In order to determine an eventual genotype effect, 2-way ANOVA was performed (d and f). ns: p > 0.05 (not shown); *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001
Prolonged MTZ treatment prevents from worsening and/or improves RTT symptoms
In addition to the pre-clinical studies, we performed a retrospective analysis on 80 adult female RTT patients (mean age ± SD = 23.1 ± 7.5 years, range = 16–47 years; Table 1) that were admitted to the Hospital "Santa Maria alle Scotte" in Siena during the same time period. In half of these RTT patients (n = 40), MTZ was used for treating anxiety and mood disorder with poor sleep quality, according to standard medical indications [51]. Rett clinical severity scale (RCSS) and motor behavioral assessment scale (MBAS) were routinely used as core measures to monitor disease progression by two well-experienced clinicians (JH and CDF), and these two scales were retrospectively analyzed to evaluate MTZ effects on patients.
The treated cohort (MTZ) was compared with a group of 40 untreated (UNT) RTT patients of comparable age (MTZ: age = 23.9 ± 7.9 years, range = 16–47 years vs UNT group 22.3 ± 7.2 years, range = 16–40 years; p = 0.354) and disease severity at the initial visit (MTZ treated group RCSS ± SD = 22.6 ± 6.5, range 7–37 vs UNT group 24.7 ± 8.2, range 7–44; p = 0.230, Mann-Whitney test for independent samples; Table 1). Also, the global and the area-specific MBAS scores were comparable in the two groups (MTZ-treated group global MBAS ± SD = 58.7 ± 6.5 vs UNT group 56.4 ± 9.0; p = 0.596, Mann-Whitney test for independent samples; Table 1). Mean daily and daily/kg body weight doses of MTZ were 11.72 ± 6.96 mg/day (range = 3.75–30.0) and 0.28 ± 0.18 mg/kg body weight/day (range = 0.17–1.07), respectively (Table 2). The mean duration of MTZ treatment was equal to 1.64 ± 1.0 years (range = 0.08–5.0; Table 2). The treatment was well tolerated by 36 patients, while 4 patients were discontinued due to paradoxical anxiety behavior (Table 2). Thus, the NNT (number needed to treat)/NNH (number needed to harm) for this adverse event was estimated to be 10.25 (95% CI 5.026–259.8) with a relative risk of 9.0 (95% CI 0.5004 to 161.87) as compared to the untreated RTT population (z statistic = 1.49; p = 0.1361). The observed drug intolerance was unrelated to patients' age (p = 0.5054), clinical stage (p = 0.6864), RTT clinical phenotype (typical vs. preserved speech variant; p = 0.5533), disease severity (RCSS at baseline, p = 0.2307; MBAS at baseline, p = 0.4983), or per kg MTZ dosage (p = 0.3913) (see Table 3).
Table 3 Demographics and relevant features of RTT patients well tolerating MTZ treatment vs patients exhibiting drug intolerance
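The reported NNH of 10.25 and relative risk of 9.0 are consistent with 4/40 intolerant patients in the MTZ group versus 0/40 in the untreated group once a 0.5 continuity correction is applied to the zero cell; the sketch below shows the arithmetic. This is our reconstruction, since the exact computation is not spelled out in the text.

```python
# 4 of 40 MTZ-treated patients discontinued; 0 of 40 untreated patients did.
events_mtz, n_mtz = 4, 40
events_unt, n_unt = 0, 40

# Haldane-Anscombe style correction: add 0.5 to each cell (hence n + 1 per group).
risk_mtz = (events_mtz + 0.5) / (n_mtz + 1)   # 4.5 / 41
risk_unt = (events_unt + 0.5) / (n_unt + 1)   # 0.5 / 41

relative_risk = risk_mtz / risk_unt           # = 9.0
nnh = 1 / (risk_mtz - risk_unt)               # = 41 / 4 = 10.25
print(relative_risk, nnh)
```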
Given the RTT progressive natural history and the wide inter-individual variability, we evaluated the delta variations (i.e., the difference between values at the end of the observational time period vs values at the initial time point) of both clinical scores and MBAS sub-scores, in order to estimate the possible efficacy of MTZ. UNT patients showed significantly worsened symptoms (i.e., increased values in the values post-treatment vs. pre-treatment) for both total RCSS (t = 3.766, p = 0.0005, two-tailed t test for paired samples; Fig. 4a) and MBAS scores (t = 4.759, p < 0.0001, two-tailed t test for paired samples; Fig. 4c). At the opposite, the MTZ-treated group showed significantly improved clinical scores (i.e., reduced values in the values post-treatment vs. pre-treatment) for both RCSS (t = 3.250, p = 0.0024 two-tailed t test for paired samples; Fig. 4a), and MBAS (p < 0.0001, two-tailed Wilcoxon matched-pairs rank test; Fig. 4c), thus indicating improvement of symptoms. In addition, when comparing delta values of both groups, we observed in MTZ-treated patients a highly significant reduction (i.e., clinical improvement) for both RCSS (p < 0.0001, Mann-Whitney; Fig. 4b) and MBAS values (p < 0.0001, Mann-Whitney; Fig. 4d) with respect to the untreated group. Interestingly, Spearman rank correlation analysis showed that delta changes of MBAS subscores were independent of treatment dosage and duration (analysis not showed). MBAS is a 37-item scale categorized in three main areas: social behavior (M1, 16 items), oro-facial/respiratory (M2, 7 items), and motor/physical signs (M3, 14 items) [52]. Specific results for each MBAS item are shown Table 4 (all results) and in Fig. 5 (statistically significant results, only). Comparing the delta value medians for each MBAS item in the MTZ-treated and UNT patient cohorts, we found statistically significant results in the MTZ-treated group for 22 items in all the three areas of the test with 10/16 items of the M1 social behavior area, 4/7 items of the M2 oro-facial/respiratory area, and 8/14 items of the M3 motor/physical signs area (Table 4). More in detail, we observed that MTZ treatment induced a significant improvement in 5 items of the M1 area, namely, lack of sustained interest/apathy, irritability, hyperactivity, aggressiveness, self-aggressiveness (for all, p < 0.001 Mann-Whitney two-tailed test; Table 4), and showed a statistically significant protective effect for the other 17 items, in which MTZ treatment prevented further worsening of the symptoms (Table 4). By a closer look at the type of symptoms described by the 22 MBAS items that resulted significantly modified, we noted that MTZ treatment induced significant effects on clusters of signs that we found useful to group in a slightly different way with respect to the originally described MBAS areas [52]. According to this view, MTZ promoted highly significant improvements in an irritability/aggressiveness cluster of symptoms (Fig. 5a: irritability, hyperactivity, aggressiveness, self-aggressiveness, biting—the last sign did not progress), and prevented from worsening in social interactions (Fig. 5b: regression of communication skills, verbal language deficit, poor social/eye contact, unresponsiveness, hypomimia, lack of sustained interest/apathy—the last shows actually, an improvement); motor dysfunctions (Fig. 5c: hand-stereotypies, feeding difficulties, dystonia, dyskinesia, hypertonia/rigidity, hyperreflexia, scoliosis); and respiratory/autonomic dysfunctions (Fig. 
5d: breath holding, hyperventilation, vasomotor disturbances). Finally, we also found that MTZ treatment markedly reduced the number of night awakenings (Fig. 5e, p < 0.0001). Taken together, these results are evidence of the positive effects of MTZ in multiple disease domains in adult female RTT patients.
Fig. 4 Effects of MTZ treatment on RCSS and MBAS in RTT patients. a Rett clinical severity scale (RCSS): scores before and after the treatment (or an equivalent time period when no treatment was given) are represented individually for each patient. b Comparison of RCSS delta averages between groups. c Motor behavioral assessment scale (MBAS): scores before and after the treatment (or an equivalent time period when no treatment was given) are represented individually for each patient. d Comparison of MBAS delta averages between groups. Data are expressed as either paired values (a, c) or boxplots (b, d). In all cases, according to the results of the Shapiro-Wilk test, we used the paired Student's t test or Wilcoxon signed-rank test for comparisons between pre- and post-treatment time points, while we used the unpaired Student's t test or Mann-Whitney test to compare averages between treated (n = 11) and untreated (n = 11) groups. ns: p > 0.05 (not shown); *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001
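The decision rule spelled out in this legend (Shapiro-Wilk normality screening, then a paired t test or Wilcoxon signed-rank test within groups and an unpaired t test or Mann-Whitney test between groups, applied to the pre/post scores and their delta values) can be sketched in a few lines. The sketch below is illustrative only; the score arrays are made-up placeholders, not the actual patient data.

```python
# Illustrative sketch of the within-group and between-group comparisons
# described in the legend above. Score arrays are synthetic placeholders.
import numpy as np
from scipy import stats

def compare_pre_post(pre, post, alpha=0.05):
    """Within-group comparison: paired t test if the paired differences look
    normal (Shapiro-Wilk), otherwise Wilcoxon signed-rank test."""
    delta = np.asarray(post) - np.asarray(pre)
    if stats.shapiro(delta).pvalue > alpha:
        return "paired t test", stats.ttest_rel(post, pre)
    return "Wilcoxon", stats.wilcoxon(post, pre)

def compare_groups(delta_a, delta_b, alpha=0.05):
    """Between-group comparison of delta values: unpaired t test if both
    samples look normal, otherwise Mann-Whitney U test."""
    normal = (stats.shapiro(delta_a).pvalue > alpha
              and stats.shapiro(delta_b).pvalue > alpha)
    if normal:
        return "unpaired t test", stats.ttest_ind(delta_a, delta_b)
    return "Mann-Whitney", stats.mannwhitneyu(delta_a, delta_b,
                                              alternative="two-sided")

# Hypothetical RCSS scores for 11 treated and 11 untreated patients.
rng = np.random.default_rng(0)
pre_mtz, post_mtz = rng.normal(20, 3, 11), rng.normal(18, 3, 11)
pre_unt, post_unt = rng.normal(20, 3, 11), rng.normal(22, 3, 11)

print(compare_pre_post(pre_mtz, post_mtz))
print(compare_groups(post_mtz - pre_mtz, post_unt - pre_unt))
```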
Table 4 Changes of clinical features from the MBAS scoring scale in the untreated (UNT) and mirtazapine-treated (MTZ) RTT patient cohorts
Fig. 5 Effects of long-term MTZ treatment on MBAS subitems and sleep disturbances in RTT patients. a Irritability and aggressiveness, b social interactions, c motor dysfunctions, d respiratory and autonomic dysfunctions, and e night awakenings. (Complete results are shown in Table 4). All data are expressed as boxplots. In all cases, according to the results of the Shapiro-Wilk test, we used the unpaired Student's t test or Mann-Whitney test to compare averages between treated (n = 11) and untreated (n = 11) groups. ns: p > 0.05 (not shown); *p ≤ 0.05; **p ≤ 0.01; ***p ≤ 0.001
In this study, we found interesting parallelisms between the effects of MTZ in adult female Mecp2tm1.1Bird mice and in patients. First, apart from four patients showing a paradoxical anxiety reaction, we did not observe any significant side effects of long-term MTZ treatment in either Mecp2tm1.1Bird mice (30 days) or RTT patients (up to 5 years). This is a key point in terms of the future usability of this drug in RTT patients, as a high safety profile is a mandatory requirement for any chronic pharmacological treatment. Secondly, MTZ fully rescued an avoidance behavior observed in HET mice when tested in the elevated plus maze, likely reflecting a sensory hypersensitivity of the whiskers associated with reduced PV expression in the barrel cortex. In rodents, whisker hyperstimulation is associated with irritability and aggressiveness, and MTZ significantly improved irritability, aggressiveness, self-aggressiveness, hyperactivity, and biting in RTT patients. Thirdly, MTZ prevented deterioration of motor skills in Mecp2tm1.1Bird mice and RTT patients. Indeed, by using two motor tests (horizontal bar and dowel test), which had never been used before in Mecp2tm1.1Bird female mice, we showed a full rescue of the motor skill and motor learning deficits by MTZ. In parallel, in adult RTT patients, MTZ prevented worsening of hand-stereotypies, feeding difficulties, dystonia, dyskinesia, hypertonia/rigidity, hyperreflexia, and scoliosis. Finally, while the general health score was unmodified in HET mice, we found that in adult RTT patients MTZ treatment significantly improved general health (evaluated through the RCSS) as well as night awakenings and behavioral features regarding social interactions assessed through the MBAS scale, such as regression of communication skills, verbal language deficit, poor social/eye contact, unresponsiveness, hypomimia, and lack of sustained interest/apathy. Furthermore, although we could not carry out similar studies in mice, we found in treated patients a significant protective effect of MTZ on respiratory/autonomic dysfunctions, including breath holding, hyperventilation, and vasomotor disturbances. Considering the limited plasticity present in both adult mice and adult humans, it is remarkable that MTZ could decelerate or even improve the RTT and RTT-like phenotypes.
The tests included in our behavioral battery were selected according to previous studies [37, 42], and the order in which we administered the different tests to the mice was based on considerations reported in [53]. We added two new tests that had never been used before to assess motor deficits in animal models of Rett syndrome, namely the horizontal bar and the dowel tests. The rationale for introducing these new tests is that, since they are almost pure motor tests in which the emotional state of the mouse is practically irrelevant, they were the only tests that could be repeated without introducing a confounding bias. In contrast, all other tests included in our study are subject to the emotional state of the animal, and therefore common practice in behavioral testing of mice is to administer these tests only once in the behavioral test battery. Because of their lack of emotional content, both the horizontal bar and the dowel tests could be repeated before, in the middle, and at the end of the treatment period. This experimental setting allowed us to monitor the motor learning curve of the different mouse groups during the observation time. Of note, WT mice did not show any improvement (or learning) in the dowel test over the treatment period because all of them already showed an optimal performance the first time they underwent the test, i.e., they did not fall from the wooden dowel for the entire duration of the test (120 s). In contrast, considering that most mice of the HET-MTZ group performed very poorly at the pre-treatment time point, their progressive improvement at the intermediate and final time points compared with pre-treatment likely reflects motor learning that was taking place during repeated testing.
In terms of age and genetics, the Mecp2tm1.1Bird female mice we used were optimal for comparison with the adult RTT patients investigated. According to the criteria reviewed in [54], we calculated that the age of the HET female mice used in our study (6 months) was equivalent to approximately 25 human years, which is perfectly comparable with the mean age of the patients included in the retrospective study (23.1 ± 7.5 years). Moreover, just like female RTT patients, HET female mice present a variable dosage of the Mecp2 gene, depending on the mosaicism pattern created by the random inactivation of one X chromosome [44]. The heterozygosity in our mouse colony is on average around 50%, meaning that half of the cells express the Mecp2 wild-type allele, with a range between 30 and 70% (JFG and ET, unpublished observations). Despite all its advantages as an animal model to study RTT, Mecp2tm1.1Bird female mice show limitations as well. In particular, the high weight gain, common to all HET female mice starting from the third month of life but present only in a low percentage of RTT patients (around 8–10%), obviously impairs performance in motor tests. This may explain the limited improvement in motor tests, even though MTZ induced a full recovery of normal PV expression in the primary motor cortex. Another limitation regards the fact that social behavior deficits are not evident in adult HET female mice [37, 42] and, therefore, we could not compare this disease dimension, which is relevant in people with RTT. Finally, the positive effect of MTZ on aggressiveness observed in patients could not be compared with an analogous behavior in female HET mice. Aggression in mice is usually measured using the resident-intruder test, which evaluates territorial behavior in male rodents [55], while female mice do not display aggressive behavior towards strangers [20, 56]. Using the resident-intruder test, enhanced aggressive behavior has been observed in male conditional knock-out mice lacking Mecp2 within serotonergic cells [21], but no increased aggressiveness was found in Mecp2308/Y male mice [20, 57], although they display enhanced avoidance behaviors [58].
In this study, we propose that the aberrant open-arm preference of Mecp2tm1.1Bird mice in the EPM is due to sensory hypersensitivity. As previously described in the RTT literature [37, 42], we confirmed that female HET mice spend more time exploring the open arms compared with WT littermates. Furthermore, we demonstrate that MTZ can completely normalize the altered behavior in the EPM and that clipping of whiskers in HET mice produced a fully equivalent rescue. This finding strongly suggests that the specific phenotype of HET mice in the EPM is actually not related to abnormal anxiety levels but rather to an avoidance of the closed arms, possibly due to hypersensitivity of the whiskers. The fact that neither the open-field nor the light-dark exploration test showed any anxiety-related phenotype in HET mice further strengthens our hypothesis. Mechanical hypersensitivity is an emerging finding in RTT research. Two recent studies showed marked skin hyperinnervation [59] and mechanical hypersensitivity in a male rat model proposed to study RTT, as well as somatosensory and viscerosensory alterations in female rats of the same model [59, 60]. In striking agreement, another study demonstrated increased sensory fiber innervation in skin biopsies from RTT patients [61]. In people with RTT, somatosensory disturbances may range from an elevated pain threshold or heat insensitivity [62] to hypersensitivity [63, 64]. Notably, tactile hypersensitivity of the skin and hair was often observed during routine examinations of patients during our clinical study, likely contributing to the increased irritability observed in adult RTT patients (JH and CDF, unpublished observations). Accordingly, here we put forward the idea that partial or complete restoration of normal responses to somatosensory stimuli could be one positive outcome of MTZ treatment in RTT.
Regarding the possible mechanism underlying the rescue of the aberrant behavior in the EPM, our histological analyses shed some light on it. Although the overall number of PV+ neurons was slightly increased in both the motor and somatosensory cortices, we found that expression of PV was significantly downregulated in the barrel cortex of HET-VEH mice. A previous study showed that altered PV expression correlates with both a decrease in the number of glutamatergic afferents and an increase in GABAergic afferents to PV+ cells, likely leading to a depression of the electrical activity of these cells [49]. Of note, PV+ neurons have been directly linked to the generation of gamma oscillations that modulate afferent information in the barrel cortex [65]. Hence, we propose that in HET-VEH mice the internal inhibition in the barrel cortex might be decreased, resulting in upregulated processing along the somatosensory pathway originating in the whiskers. Under these circumstances, whisker sensory stimuli perceived as normal by WT mice are disturbing for HET mice, and as a consequence, HET mice stay longer in the open arms. Within this paradigm, MTZ could have normalized the processing of somatosensory stimuli from the whiskers through the rescue of normal internal inhibition in the barrel cortex, as indicated by the restored levels of PV in HET-MTZ mice. Interestingly, the bottle-brush test, a behavioral test based on robust stimulation of the whiskers that is commonly used to evaluate irritability in rodents, generates aggressive and defensive behavior [66, 67]. Although human behavior in this domain is much more complex than that of mice, the results obtained in experimental animals in the EPM may be related to the irritability and aggressiveness behaviors that were significantly improved in MTZ-treated patients.
In this study, we have presented the patients' MBAS data organized according to the classical M1-M3 areas originally described in the literature [52]. However, based on a clear affinity between symptoms such as "self-aggressiveness" and "aggressiveness" of the M1 area and "biting self or others" of the M2 area, we propose here an original aggregation of the results into four clusters of improvement/protection effects induced by mirtazapine, namely, irritability/aggressiveness, social interactions (and probably also attention), motor dysfunctions, and respiratory/autonomic dysfunctions. We believe that this novel subdivision may provide a better basis for further functional studies on the mechanism of action of mirtazapine.
Although the reported effect of MTZ on night awakenings became known from questionnaires completed by parents or caregivers, it appears to be relevant in the management of the disease in terms of quality of sleep and quality of life for the patients and their families. In previous studies, sleep problems have been extensively investigated in RTT as a relevant disease symptom using questionnaires [70], polysomnographic analysis [68], or electroencephalogram (EEG) spectral analysis [69]. In particular, variations in sleep problems were shown to be deeply linked to other RTT clinical features, with a prevalence that depends on the mutation type: sleep problems were least likely to be reported in RTT patients bearing a MECP2 gene with C-terminal deletions, while night-time laughing was commonest in those with a large deletion, and daytime napping was commonly reported in cases with p.R270X, p.R255X, and p.T158M mutations [70]. However, our data show no apparent correlation of sleep disturbances with any specific mutation or mutation type. Sleep quality and respiratory problems may be connected, because previous polysomnographic studies in Rett syndrome showed a ventilatory impairment linked to altered respiratory parameters and a low mean oxygen desaturation percentage during sleep [68]. Interestingly, in our retrospective case series, we found that treated patients also showed a small, but statistically significant, improvement in hypoventilation and breath-holding.
A final, important consideration regards the four patients with Rett syndrome who presented paroxysmal anxiety levels and in whom MTZ was discontinued after a short treatment period. Analysis of their clinical characteristics and genotype did not lead to any conclusive indication of a correlation between specific mutations or mutation categories and these adverse events. In fact, one patient had the common missense mutation T158M, one had the Y141X early truncating mutation, while the other two carried the frequent R294X early truncating mutation. However, another two patients bearing the same R294X mutation could be treated for 2 years, reaching the maximal dose of 30 mg/day, without any adverse effect. Thus, further investigation is needed to explore the genotype-phenotype relationship for this adverse event.
The finding that MTZ treatment can rescue a large number of abnormal behavioral features in adult RTT patients is a breakthrough in the field. A recent large observational study on the natural history of RTT identified behavioral problems as a key emerging issue in RTT [71]. Interestingly, mood and behavioral problems are considered major issues, together with sleep disturbances, seizures, and breathing problems, in terms of impact on the quality of life of families taking care of RTT girls [72]. Besides the efficacy in regulating sleep and mood disturbances, which was the initial reason for prescription in adult RTT patients, it is noteworthy that MTZ treatment could improve or slow down several of the behavioral abnormalities listed in the RTT natural history study. Taken together, our findings support the potential of MTZ for improving the quality of life of RTT patients and their families.
The datasets used and/or analyzed during the current study are available from the corresponding author on reasonable request.
BLA: Baso-lateral amygdala
D/L: Dark/light test
DNA: Deoxyribonucleic acid
EDTA: Ethylenediaminetetraacetic acid
EPM: Elevated plus maze
GABA: Gamma-aminobutyric acid
HET: Heterozygous-Mecp2 female mice
MBAS: Motor behavioral assessment scale
MECP2: Methyl-CpG binding protein 2
MTZ: Mirtazapine
PCR: Polymerase chain reaction
PSV: Preserved speech variant
RCSS: Rett clinical severity scale
RTT: Rett syndrome
SDS: Sodium-dodecyl-sulphate
Tris: Tris(hydroxymethyl)aminomethane
UNT: Untreated female mice
UNTw-: Untreated female mice without whiskers
VEH: Vehicle
WT: Wild type-Mecp2 female mice
XCI: X-chromosome inactivation
Hagberg B, Aicardi J, Dias K, Ramos O. A progressive syndrome of autism, dementia, ataxia, and loss of purposeful hand use in girls: Rett's syndrome: report of 35 cases. Ann Neurol. 1983 Oct;14(4):471–9.
Rett A. On a remarkable syndrome of cerebral atrophy associated with hyperammonaemia in childhood. Wien Med Wochenschr. 2016 Sep;166(11–12):322–4.
Leonard H, Cobb S, Downs J. Clinical and biological progress over 50 years in Rett syndrome. Nat Rev Neurol. 2017 Jan;13(1):37–51.
Fehr S, Bebbington A, Nassar N, Downs J, Ronen GM, De Klerk N, et al. Trends in the diagnosis of Rett syndrome in Australia. Pediatr Res. 2011 Sep;70(3):313–9.
Amir RE, Van den Veyver IB, Wan M, Tran CQ, Francke U, Zoghbi HY. Rett syndrome is caused by mutations in X-linked MECP2, encoding methyl-CpG-binding protein 2. Nat Genet. 1999 Oct;23(2):185–8.
Connolly DR, Zhou Z. Genomic insights into MeCP2 function: a role for the maintenance of chromatin architecture. Curr Opin Neurobiol. 2019 Dec;59:174–9.
Della Ragione F, Vacca M, Fioriniello S, Pepe G, D'Esposito M. MECP2, a multi-talented modulator of chromatin architecture. Brief Funct Genomics. 2016 Jun;12:elw023.
Cuddapah VA, Pillai RB, Shekar KV, Lane JB, Motil KJ, Skinner SA, et al. Methyl-CpG-binding protein 2 (MECP2) mutation type is associated with disease severity in Rett syndrome. J Med Genet. 2014 Mar;51(3):152–8.
Przanowski P, Wasko U, Zheng Z, Yu J, Sherman R, Zhu LJ, et al. Pharmacological reactivation of inactive X-linked Mecp2 in cerebral cortical neurons of living mice. Proc Natl Acad Sci. 2018;115(31):7991–6.
Erlandson A, Hagberg B. MECP2 abnormality phenotypes: clinicopathologic area with broad variability. J Child Neurol. 2005;20(9):727–32.
Neul JL, Kaufmann WE, Glaze DG, Christodoulou J, Clarke AJ, Bahi-Buisson N, et al. Rett syndrome: revised diagnostic criteria and nomenclature. Ann Neurol. 2010;68(6):944–50.
Chahrour M, Zoghbi HY. The story of Rett syndrome: from clinic to neurobiology. Neuron. 2007 Nov;56(3):422–37.
Roze E, Cochen V, Sangla S, Bienvenu T, Roubergue A, Leu-Semenescu S, et al. Rett syndrome: an overlooked diagnosis in women with stereotypic hand movements, psychomotor retardation, parkinsonism, and dystonia? Mov Disord Off J Mov Disord Soc. 2007;22(3):387–9.
Ip JPK, Mellios N, Sur M. Rett syndrome: insights into genetic, molecular and circuit mechanisms. Nat Rev Neurosci. 2018 Jun;19(6):368–82.
Tarquinio DC, Hou W, Neul JL, Kaufmann WE, Glaze DG, Motil KJ, et al. The changing face of survival in Rett syndrome and MECP2-related disorders. Pediatr Neurol. 2015 Nov;53(5):402–11.
Guy J, Gan J, Selfridge J, Cobb S, Bird A. Reversal of neurological defects in a mouse model of Rett syndrome. Science. 2007 Feb 23;315(5815):1143–7.
Reiss AL, Faruque F, Naidu S, Abrams M, Beaty T, Bryan RN, et al. Neuroanatomy of Rett syndrome: a volumetric imaging study. Ann Neurol. 1993 Aug;34(2):227–34.
Zoghbi HY. Rett syndrome and the ongoing legacy of close clinical observation. Cell. 2016 Oct;167(2):293–7.
Clarke AJ, Abdala Sheikh AP. A perspective on 'cure' for Rett syndrome. Orphanet J Rare Dis. 2018 02;13(1):44.
Katz DM, Berger-Sweeney JE, Eubanks JH, Justice MJ, Neul JL, Pozzo-Miller L, et al. Preclinical research in Rett syndrome: setting the foundation for translational success. Dis Model Mech. 2012 Nov 1;5(6):733–45.
Samaco RC, Mandel-Brehm C, Chao H-T, Ward CS, Fyffe-Maricich SL, Ren J, et al. Loss of MeCP2 in aminergic neurons causes cell-autonomous defects in neurotransmitter synthesis and specific behavioral abnormalities. Proc Natl Acad Sci. 2009 Dec 22;106(51):21966–71.
Santos M, Summavielle T, Teixeira-Castro A, Silva-Fernandes A, Duarte-Silva S, Marques F, et al. Monoamine deficits in the brain of methyl-CpG binding protein 2 null mice suggest the involvement of the cerebral cortex in early stages of Rett syndrome. Neuroscience. 2010 Oct;170(2):453–67.
Temudo T, Rios M, Prior C, Carrilho I, Santos M, Maciel P, et al. Evaluation of CSF neurotransmitters and folate in 25 patients with Rett disorder and effects of treatment. Brain and Development. 2009 Jan;31(1):46–51.
Roux J-C, Dura E, Moncla A, Mancini J, Villard L. Treatment with desipramine improves breathing and survival in a mouse model for Rett syndrome. Eur J Neurosci. 2007;8.
Zanella S, Mebarek S, Lajard A-M, Picard N, Dutschmann M, Hilaire G. Oral treatment with desipramine improves breathing and life span in Rett syndrome mouse model. Respir Physiol. 2008;6.
Mancini J, Dubus J-C, Jouve E, Roux J-C, Franco P, Lagrue E, et al. Effect of desipramine on patients with breathing disorders in RETT syndrome. Ann Clin Transl Neurol. 2018 Feb;5(2):118–27.
Szegedi A, Schwertfeger N. Mirtazapine: a review of its clinical efficacy and tolerability. Expert Opin Pharmacother. 2005 Apr;6(4):631–41.
Burrows GD, Kremer CME. Mirtazapine: clinical advantages in the treatment of depression. J Clin Psychopharmacol. 1997 Apr;17:34S–9S.
Hartmann P. Mirtazapine: a newer antidepressant. Am Fam Physician. 1999 Jan 1;59(1):159–61.
Guy J, Hendrich B, Holmes M, Martin JE, Bird A. A mouse Mecp2-null mutation causes neurological symptoms that mimic Rett syndrome. Nat Genet. 2001 Mar;27(3):322–6.
Bittolo T, Raminelli CA, Deiana C, Baj G, Vaghi V, Ferrazzo S, et al. Pharmacological treatment with mirtazapine rescues cortical atrophy and respiratory deficits in MeCP2 null mice. Sci Rep. 2016 Apr;6(1):19796.
Landis SC, Amara SG, Asadullah K, Austin CP, Blumenstein R, Bradley EW, et al. A call for transparent reporting to optimize the predictive value of preclinical research. Nature. 2012 Oct;490(7419):187–91.
Anttila SAK, Leinonen EVJ. A review of the pharmacological and clinical profile of mirtazapine. CNS Drug Rev. 2006;7(3):249–64.
Nair A, Jacob S. A simple practice guide for dose conversion between animals and human. J Basic Clin Pharm. 2016;7(2):27.
Deacon RMJ. Measuring motor coordination in mice. J Vis Exp. 2013;75:2609.
Deacon RM. Assessing nest building in mice. Nat Protoc. 2006;1(3):1117–9.
Vogel Ciernia A, Pride MC, Durbin-Johnson B, Noronha A, Chang A, Yasui DH, et al. Early motor phenotype detection in a female mouse model of Rett syndrome is improved by cross-fostering. Hum Mol Genet. 2017 May 15;26(10):1839–54.
Paxinos G, Franklin K. The mouse brain in stereotaxic coordinates, compact. 3rd Edition. Academic Press; 2008. 256 p.
Schindelin J, Arganda-Carreras I, Frise E, Kaynig V, Longair M, Pietzsch T, et al. Fiji: an open-source platform for biological-image analysis. Nat Methods. 2012 Jul;9(7):676–82.
Renieri A, Mari F, Mencarelli MA, Scala E, Ariani F, Longo I, et al. Diagnostic criteria for the Zappella variant of Rett syndrome (the preserved speech variant). Brain and Development. 2009 Mar;31(3):208–16.
Faul F, Erdfelder E, Lang A-G, Buchner A. G*Power 3: a flexible statistical power analysis program for the social, behavioral, and biomedical sciences. Behav Res Methods. 2007 May;39(2):175–91.
Samaco RC, McGraw CM, Ward CS, Sun Y, Neul JL, Zoghbi HY. Female Mecp2+/− mice display robust behavioral deficits on two different genetic backgrounds providing a framework for pre-clinical studies. Hum Mol Genet. 2013;22(1):96–109.
Mount RH, Hastings RP, Reilly S, Cass H, Charman T. Behavioural and emotional features in Rett syndrome. Disabil Rehabil. 2001 Mar 15;23(3–4):129–38.
Barnes KV, Coughlin FR, O'Leary HM, Bruck N, Bazin GA, Beinecke EB, et al. Anxiety-like behavior in Rett syndrome: characteristics and assessment by anxiety scales. J Neurodev Disord. 2015 Dec;7(1):30.
Flanigan TJ, Xue Y, Kishan Rao S, Dhanushkodi A, McDonald MP. Abnormal vibrissa-related behavior and loss of barrel field inhibitory neurons in 5xFAD transgenics. Genes Brain Behav. 2014 Jun;13(5):488–500.
Jiang X, Lachance M, Rossignol E. Involvement of cortical fast-spiking parvalbumin-positive basket cells in epilepsy. Prog Brain Res. 2016;226:81–126.
Permyakov EA, Uversky VN, Permyakov SE. Parvalbumin as a pleomorphic protein. Curr Protein Pept Sci. 2017;18(8):780–94.
Nelson SB, Valakh V. Excitatory/inhibitory balance and circuit homeostasis in autism spectrum disorders. Neuron. 2015 Aug 19;87(4):684–98.
Morello N, Schina R, Pilotto F, Phillips M, Melani R, Plicato O, et al. Loss of Mecp2 causes atypical synaptic and molecular plasticity of parvalbumin-expressing interneurons reflecting Rett syndrome–like sensorimotor defects. eneuro. 2018;5(5):ENEURO.0086-18.2018.
Kishi N, Macklis JD. MeCP2 functions largely cell-autonomously, but also non-cell-autonomously, in neuronal maturation and dendritic arborization of cortical pyramidal neurons. Exp Neurol. 2010 Mar;222(1):51–8.
Karsten J, Hagenauw LA, Kamphuis J, Lancel M. Low doses of mirtazapine or quetiapine for transient insomnia: a randomised, double-blind, cross-over, placebo-controlled trial. J Psychopharmacol (Oxf). 2017;31(3):327–37.
FitzGerald PM, Jankovic J, Percy AK. Rett syndrome and associated movement disorders. Mov Disord. 1990;5(3):195–202.
Wahlsten D. Chapter 10 - Domains and Test Batteries. In: Wahlsten D, editor. Mouse behavioral testing [Internet]. London: Academic Press; 2011 [cited 2020 Jul 18]. p. 157–75. Available from: http://www.sciencedirect.com/science/article/pii/B9780123756749100102.
Dutta S, Sengupta P. Men and mice: relating their ages. Life Sci. 2016 May;152:244–8.
Oortmerssen GAV. Origin of variability in behaviour within and between inbred strains of mice (Mus musculus) - a behaviour genetic study. 1971;91.
Veenema AH, Bredewold R, Neumann ID. Opposite effects of maternal separation on intermale and maternal aggression in C57BL/6 mice: link to hypothalamic vasopressin and oxytocin immunoreactivity. Psychoneuroendocrinology. 2007 Jun;32(5):437–50.
Moretti P, Bouwknecht JA, Teague R, Paylor R, Zoghbi HY. Abnormalities of social interactions and home-cage behavior in a mouse model of Rett syndrome. Hum Mol Genet. 2005 Jan 15;14(2):205–20.
Pearson BL, Defensor EB, Caroline Blanchard D, Blanchard RJ. Applying the ethoexperimental approach to neurodevelopmental syndrome research reveals exaggerated defensive behavior in Mecp2 mutant mice. Physiol Behav. 2015 Jul;146:98–104.
Bhattacherjee A, Winter M, Eggimann L, Mu Y, Gunewardena S, Liao Z, et al. Motor, somatosensory, viscerosensory and metabolic impairments in a heterozygous female rat model of Rett syndrome. Int J Mol Sci. 2017;19(1):97.
Bhattacherjee A, Mu Y, Winter MK, Knapp JR, Eggimann LS, Gunewardena SS, et al. Neuronal cytoskeletal gene dysregulation and mechanical hypersensitivity in a rat model of Rett syndrome. Proc Natl Acad Sci. 2017 Aug 15;114(33):E6952–61.
Symons FJ, Barney CC, Byiers BJ, McAdams BD, Foster SXYL, Feyma TJ, et al. A clinical case–control comparison of epidermal innervation density in Rett syndrome. Brain Behav. 2019 May;9(5):e01285.
Downs J, Géranton SM, Bebbington A, Jacoby P, Bahi-Buisson N, Ravine D, et al. Linking MECP2 and pain sensitivity: the example of Rett syndrome. Am J Med Genet A. 2010 May;152A(5):1197–205.
Symons FJ, Byiers B, Tervo RC, Beisang A. Parent-reported pain in Rett syndrome. Clin J Pain. 2013;29(8):744–6.
Barney CC, Feyma T, Beisang A, Symons FJ. Pain experience and expression in Rett syndrome: subjective and objective measurement approaches. J Dev Phys Disabil. 2015 Aug;27(4):417–29.
Cardin JA, Carlén M, Meletis K, Knoblich U, Zhang F, Deisseroth K, et al. Driving fast-spiking cells induces gamma rhythm and controls sensory responses. Nature. 2009 Jun;459(7247):663–7.
Kimbrough A, de Guglielmo G, Kononoff J, Kallupi M, Zorrilla EP, George O. CRF 1 receptor-dependent increases in irritability-like behavior during abstinence from chronic intermittent ethanol vapor exposure. Alcohol Clin Exp Res. 2017;41(11):1886–95.
Riittinen M-L, Lindroos F, Kimanen A, Pieninkeroinen E, Pieninkeroinen I, Sippola J, et al. Impoverished rearing conditions increase stress-induced irritability in mice: impoverished rearing increases irritability. Dev Psychobiol. 1986;19(2):105–11.
Carotenuto M, Esposito M, D'Aniello A, Rippa CD, Precenzano F, Pascotto A, et al. Polysomnographic findings in Rett syndrome: a case–control study. Sleep Breath. 2013 Mar;17(1):93–8.
Ammanuel S, Chan WC, Adler DA, Lakshamanan BM, Gupta SS, Ewen JB, et al. Heightened delta power during slow-wave-sleep in patients with Rett syndrome associated with poor sleep efficiency. Ferri R, editor. PLOS ONE. 2015 Oct 7;10(10):e0138113.
Young D, Nagarajan L, de Klerk N, Jacoby P, Ellaway C, Leonard H. Sleep problems in Rett syndrome. Brain and Development. 2007 Nov;29(10):609–16.
Buchanan CB, Stallworth JL, Scott AE, Glaze DG, Lane JB, Skinner SA, et al. Behavioral profiles in Rett syndrome: data from the natural history study. Brain and Development. 2019 Feb;41(2):123–34.
Corchón S, Carrillo-López I, Cauli O. Quality of life related to clinical features in patients with Rett syndrome and their parents: a systematic review. Metab Brain Dis. 2018 Dec;33(6):1801–10.
The authors thank Alessio Cortelazzo, Ph.D., for technical support. A special thanks to Dr. Eng. Giancarlo Dughera who inspired us to study the effects of mirtazapine in adults with Rett syndrome.
This study has been supported by a grant from Associazione Italiana Rett Onlus (AIRETT) to ET.
Department of Life Sciences, University of Trieste, Via Licio Giorgieri, 5 - 34127, Trieste, Italy
Javier Flores Gutiérrez, Giulia Natali & Enrico Tongiorgi
Neonatal Intensive Care Unit, Azienda Ospedaliera Universitaria Senese, 53100, Siena, Italy
Claudio De Felice & Silvia Leoncini
Child Neuropsychiatry Unit, Azienda Ospedaliera Universitaria Senese, 53100, Siena, Italy
Silvia Leoncini & Joussef Hayek
Department of Molecular and Developmental Medicine, University of Siena, 53100, Siena, Italy
Cinzia Signorini
Pediatric Speciality Center "L'Isola di Bau", 50052 Certaldo, Florence, Italy
Joussef Hayek
JFG performed and analyzed behavioral experiments and histological analyses in animals and wrote the paper. GN collaborated in all experiments on animals. JH conceived the MTZ treatment and laid the foundation for the retrospective analysis in patients. CS analyzed the clinical data. CDF, SL, and JH collected and analyzed the clinical data; ET conceived the study and wrote the paper. All coauthors contributed to revising the paper. The authors read and approved the final manuscript.
Correspondence to Enrico Tongiorgi.
Mice. Animals were treated according to the institutional guidelines, in compliance with the European Community Council Directive 2010/63/UE for care and use of experimental animals. Authorization for animal experimentation was obtained from the Italian Ministry of Health (Nr. 124/2018-PR, with integration Nr. 2849TON17 for whiskers clipping), in compliance with the Italian law D. Lgs.116/92 and the L. 96/2013, art. 13.
Patients. Informed consent to participate in the study was obtained by the parents/caregivers. Ethical approval was waived by the local Ethics Committee of Azienda Sanitaria of Siena in view of the retrospective nature of the study, and all the procedures being performed were part of the routine care. The retrospective investigation was conducted according to the Ethics Guidelines of the institute and the recommendations of the declaration of Helsinki and the Italian DL-No.675/31-12-1996.
Supplementary Figures
Flores Gutiérrez, J., De Felice, C., Natali, G. et al. Protective role of mirtazapine in adult female Mecp2+/− mice and patients with Rett syndrome. J Neurodevelop Disord 12, 26 (2020). https://doi.org/10.1186/s11689-020-09328-z
Intellectual disability disorders
Irritability/aggressiveness
Motor learning deficits
Somatosensory cortex
Parvalbumin neurons
October 2018, 23(8): 3427-3460. doi: 10.3934/dcdsb.2018284
Global solutions to Chemotaxis-Navier-Stokes equations in critical Besov spaces
Minghua Yang 1, Zunwei Fu 2,3, and Jinyi Sun 4
Department of Mathematics, Jiangxi University of Finance and Economics, Nanchang, 330032, China
Department of Mathematics, Linyi University, Linyi, 276000, China
School of Mathematical Sciences, Qufu Normal University, Qufu, 273100, China
Department of Mathematics, Northwest Normal University, Lanzhou, 730070, China
* Corresponding author: [email protected]
Received: April 2017; Revised: April 2018; Published: October 2018; Early access: August 2018
Fund Project: This paper was partially supported by the National Natural Science Foundation of China (Grant Nos. 11671185, 11771195), the Postdoctoral Science Foundation of Jiangxi Province (Grant No. 2017KY23), and the Educational Commission Science Program of Jiangxi Province (Grant No. GJJ170345).
In this article, we consider the Cauchy problem for a chemotaxis model coupled to the incompressible Navier-Stokes equations. Using Fourier frequency localization and the Bony paraproduct decomposition, we establish the global-in-time existence of the solution when the gravitational potential $\phi$ and the small initial data $(u_{0}, n_{0}, c_{0})$ belong to critical Besov spaces under certain conditions. Moreover, we prove that there exist two positive constants $\sigma_{0}$ and $C_{0}$ such that if the gravitational potential $\phi \in \dot B_{p,1}^{3/p}({\mathbb{R}^3})$ and the initial data $(u_{0}, n_{0}, c_{0}) := (u_{0}^{1}, u_{0}^{2}, u_{0}^{3}, n_{0}, c_{0}) := (u_{0}^{h}, u_{0}^{3}, n_{0}, c_{0})$ satisfy

$\left(\left\|u_{0}^{h}\right\|_{\dot{B}^{-1+3/p}_{p, 1}(\mathbb{R}^3)}+\left\|\left(n_{0}, c_{0}\right)\right\|_{\dot{B}^{-2+3/q}_{q, 1}(\mathbb{R}^3) \times \dot{B}^{3/q}_{q, 1}(\mathbb{R}^3)}\right) \times \exp\left\{C_{0}\left(\left\|u_{0}^{3}\right\|_{\dot{B}^{-1+3/p}_{p, 1}(\mathbb{R}^3)}+1\right)^{2}\right\} \leq \sigma_{0}$

for $p, q$ with $1<p, q<6$, $\frac{1}{p}+\frac{1}{q}>\frac{2}{3}$, and $\frac{1}{\min\{p, q\}}-\frac{1}{\max\{p, q\}} \le \frac{1}{3}$, then the global existence result can be extended to global solutions without any smallness condition imposed on the third component of the initial velocity field $u_{0}^{3}$ in critical Besov spaces, with the aid of a continuity argument. Our initial data class is larger than those of some known results. Our results are completely new even for the three-dimensional chemotaxis-Navier-Stokes system.
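For context, the coupled system referred to in the abstract is usually written in a form such as the following (a common normalized formulation of the chemotaxis-Navier-Stokes, i.e., Keller-Segel-Navier-Stokes, system; the precise sensitivity and consumption terms and the sign convention for the buoyancy term vary between papers):

$\begin{cases} \partial_t n + u\cdot\nabla n = \Delta n - \nabla\cdot(n\nabla c), \\ \partial_t c + u\cdot\nabla c = \Delta c - nc, \\ \partial_t u + (u\cdot\nabla)u + \nabla P = \Delta u - n\nabla\phi, \\ \nabla\cdot u = 0, \end{cases} \qquad (x,t)\in\mathbb{R}^3\times(0,\infty),$

where $n$ denotes the cell density, $c$ the chemoattractant concentration, $u$ the fluid velocity, $P$ the pressure, and $\phi$ the given gravitational potential.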
Keywords: Chemotaxis-Navier-Stokes equation, global solution, Besov space, Littlewood-Paley theory.
Mathematics Subject Classification: Primary: 35K55, 35Q35, 92C17; Secondary: 42B35.
Citation: Minghua Yang, Zunwei Fu, Jinyi Sun. Global solutions to Chemotaxis-Navier-Stokes equations in critical Besov spaces. Discrete & Continuous Dynamical Systems - B, 2018, 23 (8) : 3427-3460. doi: 10.3934/dcdsb.2018284
Calculation of the lower limit of the spoofing-signal ratio for a GNSS receiver-spoofer
Meng Zhou1,
Hong Li1 &
Mingquan Lu1
A receiver-spoofer is one of the most covert global navigation satellite system (GNSS) spoofing attacks and can only be effectively detected by a combination of multiple anti-spoofing technologies. In this paper, an analysis of the influencing parameters of receiver-spoofers indicates that the ratio of the spoofing-signal amplitude to the authentic-signal amplitude (the spoofing-signal ratio) is a key parameter for the spoofing outcome. To ensure covertness, a spoofer aims to keep the spoofing-signal ratio low. The carrier phase difference and code phase difference between the authentic and spoofing signals, which result from errors in estimating the position of the target receiver, increase the lower limit of the spoofing-signal ratio required for successful spoofing. A spoofing signal alters the phase of the local replica code, based on the original balance of the receiver's phase discriminator, in order to seize control of the tracking loop. Based on this principle, the lower limit of the spoofing-signal ratio corresponding to various phase discriminator spacings, carrier phase differences, and code phase differences is derived in this paper. Two tests, one using a simulation source and one using authentic navigation signals, are designed to verify the derived formula. The lower limit of the spoofing-signal ratio obtained from these tests matches the calculated results, which proves the validity and effectiveness of the derived algorithm.
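Before the formal derivation, the discriminator-balance idea can be made concrete with a small numerical sketch. The toy model below is illustrative only: it assumes a noiseless baseband signal, an ideal triangular code correlation, a non-coherent early-minus-late tracking loop, and arbitrary loop parameters, none of which are the values analyzed in this paper. For each assumed carrier phase offset it sweeps the spoofing-signal amplitude ratio and reports the smallest ratio for which a slowly dragging spoofing signal, starting with a small code phase offset, pulls the code tracking point away from the authentic signal.

```python
# Illustrative, noiseless toy model of a receiver-spoofer "drag-off" against a
# non-coherent early-minus-late delay-locked loop (DLL). All parameters are
# hypothetical placeholders, not the values derived in this paper.
import numpy as np

def corr(x):
    """Ideal triangular code autocorrelation (argument in chips)."""
    return np.maximum(0.0, 1.0 - np.abs(x))

def correlator(tau_local, tau_spoof, ratio, dphi):
    """Complex correlator output: authentic signal (delay 0, amplitude 1) plus
    spoofing signal (delay tau_spoof, amplitude ratio, carrier offset dphi)."""
    return corr(tau_local) + ratio * np.exp(1j * dphi) * corr(tau_local - tau_spoof)

def drag_off_succeeds(ratio, dphi, tau0, d=1.0, gain=0.01, rate=5e-4, steps=4000):
    """Start locked on the authentic signal with the spoofing code phase tau0
    chips away (e.g., due to a position-estimation error), then let the spoofer
    slowly walk its code phase off; return True if the DLL follows it."""
    tau_hat, tau_spoof = 0.0, tau0
    for _ in range(steps):
        early = abs(correlator(tau_hat - d / 2, tau_spoof, ratio, dphi)) ** 2
        late = abs(correlator(tau_hat + d / 2, tau_spoof, ratio, dphi)) ** 2
        tau_hat -= gain * (early - late)   # non-coherent early-minus-late update
        tau_spoof += rate                  # spoofer drags its code phase away
    return abs(tau_hat - tau_spoof) < 0.5  # did the loop stay with the spoofer?

def min_spoofing_ratio(dphi, tau0):
    """Coarse search for the smallest amplitude ratio that achieves drag-off."""
    for ratio in np.arange(0.5, 3.01, 0.05):
        if drag_off_succeeds(ratio, dphi, tau0):
            return round(float(ratio), 2)
    return float("inf")

for dphi_deg in (0, 90, 180):
    print(f"carrier phase offset {dphi_deg:3d} deg -> "
          f"minimum ratio ~ {min_spoofing_ratio(np.radians(dphi_deg), tau0=0.1)}")
```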
A global navigation satellite system (GNSS) is a space-based radio navigation and positioning system that can provide three-dimensional coordinates, velocity, and time in all weather conditions for users anywhere on the surface of the Earth or in near-Earth space. GNSSs have been extensively employed in many areas, including precision agriculture, scientific research, environment monitoring, emergency and disaster assessment, safety assurance, positioning of celestial bodies, construction engineering and natural resources, and smart transportation, and they have created significant social and economic benefits. However, the safety and security of GNSSs have become an increasing concern. If a GNSS signal is inadvertently interfered with or maliciously attacked, the GNSS user experience will be affected; in severe cases, an accident caused by interference or attack may cause irreversible economic loss or pose a significant threat to personal safety. Therefore, the study of GNSS anti-interference and anti-spoofing has become a research focus.
Spoofing attack
Spoofing via navigation signal simulator
Initially, navigation signal simulators were developed to test receiver performance. However, the high-fidelity simulation of authentic navigation signals raised considerable concerns about GNSS signal security. In 2002, Warner and Johnston rented a simulator and proved that this equipment could spoof popular handheld civilian Global Positioning System (GPS) receivers on the market [1].
Repeater spoofing
An encrypted navigation signal in a GNSS, e.g., the P code in GPS, does not have a public interface specification and therefore cannot be spoofed by a navigation signal simulator. For this type of navigation signal, however, so-called repeater spoofing is an effective spoofing method. As shown in Fig. 1, this method retransmits a received navigation signal toward the target receiver. To enable the repeater spoofing signal to capture control of the receiver, a high-power suppressing signal is emitted to force the receiver into a trapped (lost-lock) state, ensuring that the spoofing signal can then be accepted by the receiver, which results in significant positioning errors [2].
Schematic of repeater spoofing
Receiver-spoofer
The concept of a receiver-spoofer was initially proposed by Todd E. Humphreys et al. at the University of Texas in 2008 [3,4,5]. Receiver signal tracking is not interrupted during spoofing, and the power used does not need to be significantly higher than that of the authentic signal; therefore, a receiver-spoofer has an extremely high level of covertness. The principle is shown in Fig. 2. Based on the received authentic navigation signal and on the relative position and velocity with respect to the target receiver, the spoofing device calculates the pseudorange and Doppler shift of the authentic navigation signal received by the target receiver and generates a spoofing signal that is synchronous with the authentic signal. Because this signal is similar to the authentic signal, it takes control without being noticed by the target receiver.
Principle of receiver code loop operation
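As a rough illustration of the alignment step just described, the sketch below (entirely hypothetical ECEF geometry; satellite clock errors, atmospheric delays, and hardware biases are ignored) computes the code delay and carrier Doppler that a spoofing device would impose on its replica so that, at the target antenna, the counterfeit signal lines up with the authentic one.

```python
# Hypothetical geometry sketch: code phase and Doppler a spoofer must apply so
# that its replica lines up with the authentic signal at the target antenna.
# Positions and velocities are made-up ECEF values; clock terms are ignored.
import numpy as np

C = 299_792_458.0            # speed of light, m/s
F_L1 = 1_575.42e6            # carrier frequency, Hz

sat_pos = np.array([15.6e6, 7.8e6, 19.1e6])         # satellite position, m
sat_vel = np.array([-2.1e3, 2.7e3, 0.9e3])          # satellite velocity, m/s
tgt_pos = np.array([-2.85e6, 4.65e6, 3.30e6])       # (static) target receiver
spf_pos = tgt_pos + np.array([120.0, -40.0, 10.0])  # spoofer ~130 m away

def range_and_rate(a_pos, a_vel, b_pos, b_vel):
    """Geometric range from a to b and its rate of change."""
    los = b_pos - a_pos
    r = np.linalg.norm(los)
    return r, np.dot(b_vel - a_vel, los / r)

r_sat_tgt, rr_sat_tgt = range_and_rate(sat_pos, sat_vel, tgt_pos, np.zeros(3))
r_spf_tgt, _ = range_and_rate(spf_pos, np.zeros(3), tgt_pos, np.zeros(3))

# The replica must leave the spoofer "advanced" by the spoofer-to-target
# propagation time so that its apparent delay at the target matches the
# authentic geometric delay.
authentic_delay_s = r_sat_tgt / C
replica_advance_s = r_spf_tgt / C

# Carrier Doppler the target expects on the authentic signal; with a static
# spoofer and target, the spoofer imposes the same value on its replica.
doppler_hz = -rr_sat_tgt / C * F_L1

print(f"authentic delay {authentic_delay_s * 1e3:.3f} ms, "
      f"replica advance {replica_advance_s * 1e6:.2f} us, "
      f"Doppler {doppler_hz:.1f} Hz")
```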
Using this spoofing platform, Todd E. Humphreys et al. successfully spoofed the timing terminal of an electric power grid monitoring system [6], an unmanned aerial vehicle [7], and civilian vessels [8].
Anti-spoofing technology
Anti-spoofing refers to the use of measures or technologies to detect and eliminate a GNSS spoofing signal or to hinder the ability of an attacker to spoof a target. Current anti-spoofing technology relies on two approaches. The first approach is to distinguish authentic and spoofing signals via comparison. A GNSS interface specification is publicly available, and a navigation signal generated by a spoofing device cannot be identical to an authentic signal; as the simulation accuracy of a spoofing signal increases, its cost increases as well. Therefore, the spoofing signal can be identified by finding the difference between the spoofing signal and the authentic signal. The second approach is to detect signal abnormality during spoofing. For power suppression spoofing, an alarm is generated by detecting power abnormalities; for a receiver-spoofer, a warning is generated by detecting correlation distortion in the code loop. Based on these two approaches, existing studies of anti-spoofing measures focus on the following aspects:
Improving the signal processing algorithm
A new detection process is added to the signal processing algorithm to detect carrier magnitude hops, signal power abnormalities, and correlation distortion [9]. In most circumstances, a spoofing attack is detectable. These methods are easily implemented and do not require major hardware changes. However, the selection of an adequate detection threshold to achieve a balance between the false-alarm rate and the detection rate is critical.
Signal encryption
The essence of signal encryption is to generate unpredictable navigation information that prevents an attacker from counterfeiting a similar signal for spoofing. Currently, numerous institutions have conducted thorough studies of this anti-spoofing technology [10, 11]. A disadvantage of this method is that the signal architecture of a navigation system must be changed, which requires design changes for satellite transmitters and ground receivers. For a relatively mature navigation system, such as GPS, anti-spoofing via this approach is very expensive. For a navigation system in the experimental stage, implementation of signal encryption in the signal architecture is an effective approach to improving the security and reliability of the entire navigation system.
Leveraging external accessories
This detection measure leverages receiver accessories to monitor an abnormal hop in position, velocity, or clock [12]. For a receiver with external reference information, when a spoofing attack causes a significant difference between the result calculated by a receiver and the external reference information, the received signal may contain a spoofing signal.
Signal direction detection
Carrier phase-based signal direction monitoring is a common spoofing detection method [13]. If an attacker needs to ensure that the direction of the counterfeited signal is the same as or close to that of an authentic signal, the cost of doing so is generally high. In most scenarios, if a receiver can measure the direction of a received carrier, it can easily discriminate between a spoofing signal and an authentic signal.
Analysis of spoofing attacks and anti-spoofing technology indicates that a receiver-spoofer is superior to other spoofing methods in terms of covertness and practicality, and its detection is difficult. Typically, a receiver-spoofer can only be detected by a combination of multiple anti-spoof measures, e.g., a combination of power detection and correlation distortion detection. Therefore, a study of the characteristics and key parameters of this spoofing attack and the analysis of its performance will improve the effectiveness of the anti-spoofing technology for navigation systems and ensure that defending measures are targeted.
Analysis of key parameters that affect the probability of spoofing success
The principle of the receiver code loop [14] is shown in Fig. 2.
The received signal undergoes correlation and coherent integration with locally generated early, prompt, and late PN codes on the I and Q branches to calculate the self-correlation amplitudes E, P, and L. Because the navigation signal PN code self-correlation function is symmetric along the y-axis, when the received signal aligns with the local code, E should be equal to L. If the calculated E is unequal to L, the receiver concludes that the local code is misaligned and generates a phase discrimination result based on the difference between E and L. This difference is corrected via a numeric control oscillator (NCO) to complete code phase alignment. Therefore, calculating E and L is the key to code phase alignment, and the spoofing signal seizes control by affecting these values.
When only the authentic signal is present, E and L are calculated via Formula (1):
$$ {\displaystyle \begin{array}{l}E=\sqrt{I_{\mathrm{E}}^2+{Q}_{\mathrm{E}}^2}= aR\left({\tau}_{\mathrm{E}}\right)\left|\sin c\left({f}_{\mathrm{e}}{T}_{\mathrm{coh}}\right)\right|\\ {}L=\sqrt{I_{\mathrm{L}}^2+{Q}_{\mathrm{L}}^2}= aR\left({\tau}_{\mathrm{L}}\right)\left|\sin c\left({f}_{\mathrm{e}}{T}_{\mathrm{coh}}\right)\right|\end{array}} $$
Here, IE, QE, IL, and QL represent the early/late correlation integrations of the I/Q branches; a represents the signal amplitude; R(·) represents the unitized correlation function of the PN codes; τE and τL represent the phase differences between the early/late local code and the received code; fe represents the frequency difference of the local replica carrier versus the received signal; and Tcoh represents the correlation and coherent integration period. In steady tracking, E and L are approximately equal.
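To make Formula (1) concrete, the following is a minimal Python sketch of the early/prompt/late amplitudes for a single received signal; it assumes an ideal triangular PN autocorrelation, takes the early/late replicas offset by ±d chips from the prompt (the convention implied by Formula (9)), and uses NumPy's normalized sinc as the paper's sinc term.

```python
import numpy as np

def triangle_corr(tau):
    """Unitized PN-code autocorrelation R(tau) (ideal triangle, tau in chips)."""
    return max(0.0, 1.0 - abs(tau))

def correlator_amplitudes(a, tau, d, f_e, T_coh):
    """Early/prompt/late amplitudes per Formula (1).

    a     : received signal amplitude
    tau   : signed code phase of the received signal relative to the prompt replica (chips)
    d     : phase discriminator spacing (chips)
    f_e   : carrier frequency error of the local replica (Hz)
    T_coh : coherent integration period (s)
    """
    atten = abs(np.sinc(f_e * T_coh))        # |sinc(f_e * T_coh)| attenuation
    E = a * triangle_corr(tau - d) * atten   # early correlator
    P = a * triangle_corr(tau) * atten       # prompt correlator
    L = a * triangle_corr(tau + d) * atten   # late correlator
    return E, P, L
```

When the local code is aligned (tau = 0), E and L are equal, which is the balanced state the code loop maintains.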
When an authentic signal and a spoofing signal coexist, the signal at the receiver is a superposition of the two signals.
$$ S(t)={S}_{\mathrm{R}}(t)+{S}_{\mathrm{S}}(t) $$
where S(t) represents the received signal, SR(t) represents the authentic signal, and SS(t) represents the spoofing signal.
In this case, the early/late non-coherent integration is given by Formula (3), and the phase discriminator output is shown in Fig. 3.
$$ {\displaystyle \begin{array}{l}K=\sqrt{{\left({I}_{\mathrm{K}}+I{\hbox{'}}_{\mathrm{K}}\right)}^2+{\left({\mathrm{Q}}_{\mathrm{K}}+Q{\hbox{'}}_{\mathrm{K}}\right)}^2}=\\ {}\sqrt{{\left(A\cdot {R}_{\mathrm{K}}\left({\tau}_{\mathrm{R}}\right)\left|\sin c\left({f}_{\mathrm{e}}{T}_{\mathrm{coh}}\right)\right|\right)}^2+{\left(\eta A\cdot {R}_{\mathrm{K}}\left({\tau}_{\mathrm{S}}\right)\left|\sin c\left({f_{\mathrm{e}}}^{\hbox{'}}{T}_{\mathrm{coh}}\right)\right|\right)}^2+2\eta {A}^2{R}_{\mathrm{K}}\left({\tau}_{\mathrm{R}}\right)R\left({\tau}_{\mathrm{S}}\right)\left|\sin c\left({f}_{\mathrm{e}}{T}_{\mathrm{coh}}\right)\right|\left|\sin c\left({f_{\mathrm{e}}}^{\hbox{'}}{T}_{\mathrm{coh}}\right)\right|\cos \left(\phi -{\phi}^{\hbox{'}}\right)}\end{array}} $$
Phase discriminator output when a spoofing signal exists
Here, K represents the early/late non-coherent integration. τR and τS represent the phase differences of the local replicate code versus an authentic signal and a spoofing signal, respectively. RK(τR) and RK(τS) represent the normalized early/late correlation functions for authentic signals and spoofing signals, respectively. A and η represent the amplitude of authentic signals and the ratio of the spoofing signal amplitude to the authentic signal amplitude. fe and fe' represent the frequency differences of a local replicate carrier versus authentic signals and spoofing signals. ϕ and ϕ' represent the carrier phases of authentic signals and spoofing signals.
These parameters have an impact on the spoofing process. In addition to these parameters, the phase discriminator spacing d is also a factor that will affect the calculation of E and L, since it is a parameter of the correlation function R(.), which will be expressed in Formulae (9), (25), and (28). For brevity, noises from the I and Q branches are not included in the formula. Therefore, when loop noise is considered, the signal-to-noise ratio (SNR) is also a factor that impacts the spoofing process.
Considering numerous factors in the spoofing process, analyzing all factors is a complicated task. These parameters can be divided into three types. The first type is determined by the configuration of the target receiver, including the correlation and coherent integration period Tcoh, phase discriminator spacing d, and authentic signal amplitude A. As these parameters are not controllable by a spoofing platform, these parameters are set to typical values, and the investigation is based on typical scenarios.
The second type is determined by the receiver authentic signal lock-in state during a spoofing attack, which includes the frequency difference between an authentic signal and a local replicate carrier (fe) and the phase difference between an authentic signal and local replicate code (τR). A reasonable assumption is as follows: before spoofing, the target receiver steadily tracks an authentic signal. Therefore, fe and τR are approximately equal to 0.
The last type is closely related to a spoofing signal; these parameters include the frequency difference between a spoofing signal and a local replicate carrier (fe'), the phase difference between a spoofing signal and local replicate code (τS), the phase difference between an authentic signal carrier and a spoofing signal carrier (ϕ − ϕ'), and the ratio of the spoofing signal amplitude versus the authentic signal amplitude (η). The spoofing device always prefers that the first three parameters are close to 0, which is very difficult to achieve due to various reasons such as measurement error and device cost. Only η is a controllable parameter for the spoofing device. Therefore, η is the most important parameter for generating a spoofing signal and is the focus of numerous anti-spoofing attack studies. The spoofing-signal ratio (η) is defined as the ratio of the spoofing signal amplitude to the authentic signal amplitude. Based on the above assumption and the parameter analysis method, the lower limit of η required for successful spoofing in various conditions is deduced in the following sections.
Deduction of a formula for the lower limit of the spoofing-signal ratio
When the carrier frequency and phase from the spoofing device align with an authentic signal that is originally locked by the target receiver, the lower limit of the spoofing-signal ratio required for a spoofing device to seize control of a receiver code loop is deduced in reference [15]. We have the following:
When the phase discriminator spacing of the target receiver is equal to 0.5 chip, the lower limit of the spoofing-signal ratio is as follows:
$$ \left\{\begin{array}{l}\begin{array}{cc}\operatorname{inf}\left\{\eta \right\}=1& {\tau}_0\le 1\\ {}\operatorname{inf}\left\{\eta \right\}=\frac{1}{2-{\tau}_0}& 1.5>{\tau}_0>1\end{array}\\ {}\operatorname{inf}\left\{\eta \right\}=\infty \kern0.5em 1.5\le {\tau}_0\end{array}\right. $$
When the phase discriminator spacing of the target receiver is less than 0.5 chip, the lower limit of the spoofing-signal ratio is as follows:
$$ \left\{\begin{array}{l}\begin{array}{cc}\operatorname{inf}\left\{\eta \right\}=1& {\tau}_0\le 1\\ {}\operatorname{inf}\left\{\eta \right\}=\frac{2d}{1+2d-{\tau}_0}& 1+d>{\tau}_0>1\end{array}\\ {}\operatorname{inf}\left\{\eta \right\}=\infty \kern0.5em 1+d\le {\tau}_0\end{array}\right. $$
When the phase discriminator spacing of the target receiver exceeds 0.5 chip, the lower limit of the spoofing-signal ratio is as follows:
$$ \left\{\begin{array}{l}\begin{array}{cc}\operatorname{inf}\left\{\eta \right\}=1& {\tau}_0\le 2d\\ {}\operatorname{inf}\left\{\eta \right\}=\frac{2\left(1-d\right)}{2-{\tau}_0}& 1+d>{\tau}_0>2d\end{array}\\ {}\operatorname{inf}\left\{\eta \right\}=\infty \kern0.5em 1+d\le {\tau}_0\end{array}\right. $$
Here τ0 represents the code phase difference between a spoofing signal and an authentic signal; this error is caused by an inaccurate estimation of the target receiver position by the spoofer. d represents the phase discriminator spacing of the target receiver. inf{η} is the lower limit of the spoofing-signal ratio required for successful spoofing under various τ0.
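For quick reference, these three cases can be collected into one helper function; a minimal Python sketch that follows the inequalities exactly as printed in Formulas (4)–(6):

```python
import math

def eta_lower_limit_aligned(tau0, d):
    """Lower limit of the spoofing-signal ratio for aligned carriers
    (Formulas (4)-(6), from Reference [15]).

    tau0 : code phase difference between spoofing and authentic signal (chips)
    d    : phase discriminator spacing of the target receiver (chips)
    """
    if d == 0.5:
        if tau0 <= 1:
            return 1.0
        if tau0 < 1.5:
            return 1.0 / (2.0 - tau0)
        return math.inf                      # spoofing cannot succeed
    if d < 0.5:
        if tau0 <= 1:
            return 1.0
        if tau0 < 1 + d:
            return 2 * d / (1 + 2 * d - tau0)
        return math.inf
    # d > 0.5
    if tau0 <= 2 * d:
        return 1.0
    if tau0 < 1 + d:
        return 2 * (1 - d) / (2 - tau0)
    return math.inf
```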
In a real scenario, a spoofing signal has difficulty aligning with an authentic signal carrier received by the target receiver, or achieving this alignment is extremely expensive, e.g., high precision distance measurement technology (radar) can be employed to measure the relative position of the two signals. Therefore, these conclusions are only meaningful in a laboratory environment and have a very limited reference value for actual spoofing and anti-spoofing practice. This paper focuses on a scenario with a misaligned carrier and analyzes and deduces the lower limit of the spoofing-signal ratio required for successful spoofing.
The mechanism of a receiver-spoofer is as follows: the code phase of the spoofing signal is gradually changed to influence the code loop phase discriminator output and disrupt the receiver's lock on the authentic signal; the phase of the receiver's local replica code in the code loop is gradually induced to align with the code phase of the spoofing signal and drift away from the authentic signal, after which receiver control is seized [3]. Assume that the spoofing signal code waits for the loop phase discriminator to stabilize before changing phase. Each phase change is referred to as a traction.
To simplify the deduction of the lower limit of the spoofing-signal ratio, assume that the frequencies of the spoofing signal, authentic signal, and receiver local replicate code are identical. This assumption requires that a spoofing device can accurately obtain velocity information about the target receiver, which is achievable in most spoofing scenarios, including a stationary receiver, ships, and steadily moving vehicles. With this assumption, fe and fe' in Formula (3) are approximately equal to 0. Therefore, Formula (3) is simplified as follows:
$$ {\displaystyle \begin{array}{l}{S}_{\mathrm{K}}=\sqrt{{\left({I}_{\mathrm{K}}+I{\hbox{'}}_{\mathrm{K}}\right)}^2+{\left({\mathrm{Q}}_{\mathrm{K}}+Q{\hbox{'}}_{\mathrm{K}}\right)}^2}\\ {}=\sqrt{{\left(A\cdot R\left({\tau}_{\mathrm{K}}\right)\right)}^2+{\left(\eta A\cdot R\left(\tau {\hbox{'}}_{\mathrm{K}}\right)\right)}^2+2\eta {A}^2R\left({\tau}_{\mathrm{K}}\right)R\left(\tau {\hbox{'}}_{\mathrm{K}}\right)\cos \left(\phi -{\phi}^{\hbox{'}}\right)}\\ {}K=E,L\end{array}} $$
The impact of the frequency difference is removed. The phase discrimination result is directly affected by the code phase difference and the carrier phase difference between an authentic signal and a spoofing signal, as well as the spoofing-signal ratio. Therefore, the lower limit of the spoofing-signal ratio is determined by the other two parameters. The phase discriminator spacing will affect the calculation of the correlation R. In the following sections, the formula for the lower limit of the spoofing-signal ratio for different phase discriminator spacings is discussed.
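As a numerical aid for the derivations below, the composite early/late amplitudes of Formula (7) can be evaluated directly; a minimal sketch, assuming an ideal triangular PN autocorrelation and signed code phase offsets consistent with Formula (9):

```python
import math

def triangle_corr(tau):
    """Unitized PN-code autocorrelation (ideal triangle, tau in chips)."""
    return max(0.0, 1.0 - abs(tau))

def composite_EL(A, eta, tau_auth, tau_spoof, dphi, d):
    """Composite early/late amplitudes per Formula (7) (frequency errors ~ 0).

    A         : authentic signal amplitude
    eta       : spoofing-signal ratio
    tau_auth  : signed code phase of the authentic signal vs. the local prompt code (chips)
    tau_spoof : signed code phase of the spoofing signal vs. the local prompt code (chips)
    dphi      : carrier phase difference phi - phi' (rad)
    d         : phase discriminator spacing (chips)
    """
    out = {}
    for name, off in (("E", -d), ("L", +d)):
        r_a = triangle_corr(tau_auth + off)   # authentic-signal correlation
        r_s = triangle_corr(tau_spoof + off)  # spoofing-signal correlation
        out[name] = math.sqrt((A * r_a) ** 2 + (eta * A * r_s) ** 2
                              + 2 * eta * A * A * r_a * r_s * math.cos(dphi))
    return out["E"], out["L"]
```

Sweeping the local code phase and checking where E equals L reproduces the equilibrium condition E(t1) = L(t1) used in the following subsections.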
Phase discriminator spacing of target receiver = 0.5 chip
Assume that the spoofing signal enters the code loop traction range at t0. After the spoofing signal enters the code loop, the code loop attains an equilibrium state at t1. Assume that at t0 and t1, the code phase difference between an authentic signal and a spoofing signal is τ0 and stabilizes. The definition of the code loop equilibrium state is that the early correlation of the code loop is equal to the late correlation, i.e., E = L. In the deduction in Reference [15], an initial conclusion is obtained: at t1, the phase difference between the spoofing signal and the local code is τS(t1); the code phase difference between an authentic signal and the local code is τR(t1); and once τR(t1) < d and τS(t1) > d, spoofing will fail. Assume that the receiver local replicate code aligns with an authentic signal at t0, i.e., τR(t0) = 0. At this moment, the following expressions hold:
$$ \left\{\begin{array}{l}E\left({t}_1\right)=L\left({t}_1\right)\\ {}{\tau}_{\mathrm{S}}\left({t}_1\right)+{\tau}_{\mathrm{R}}\left({t}_1\right)={\tau}_0\end{array}\right. $$
where E(t1) and L(t1) represent the early correlation and the late correlation at t1, and τ0 represents the code phase difference between the spoofing signal and the authentic signal at t0 and t1. The correlation function R(τ) of the PN codes can be expressed as in Formula (9):
$$ \left\{\begin{array}{ccc}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1+d+\tau \right),& -1-d\le \tau <-d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1-d+\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1-d-\tau \right),& -d\le \tau <d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1+d-\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=0,& d\le \tau \le 1+d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)=0,& \mathrm{other}\end{array}\right. $$
where d represents the phase discriminator spacing; τ represents the code phase difference between the signal and the local replicate code.
Substituting Formula (7) for E and L and Formula (9) for the correlation function R(τ) into Formula (8), under the condition τR(t1) < d and τS(t1) > d, the solution for τR(t1) and τS(t1) is as follows:
$$ \left\{\begin{array}{l}{\tau}_{\mathrm{R}}\left({t}_1\right)=\Big(\pm \sqrt{\left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4}\\ {}-2 d\eta \alpha +2d+d{\eta}^2-{\eta}^2{\tau}_0+{\eta}^2+{\eta \alpha \tau}_0-2\Big)/\left(-{\eta}^2+2\eta \alpha \right)\\ {}{\tau}_{\mathrm{S}}\left({t}_1\right)={\tau}_0-{\tau}_{\mathrm{R}}\left({t}_1\right)\end{array}\right. $$
where α = cos(ϕ − ϕ') represents the cosine of the carrier phase difference between an authentic signal and a spoofing signal. The sign "±" in Formula (10) should be "+" when −η2 + 2ηα is greater than 0, and it should be "−" when −η2 + 2ηα is less than 0. Once the spoofing-signal ratio η of the spoofing signal is in the range for which a solution for τS(t1) and τR(t1) in Formula (10) exists, τR(t1) < d and τS(t1) > d, this spoofing-signal ratio will cause spoofing failure. The solution space S for this condition is calculated, and the complementary set \( \overline{S} \) is calculated to obtain the spoofing-signal ratio required for successful spoofing.
The necessary condition for the τR(t1) solution is that the radical in Formula (10) is greater than or equal to 0.
$$ \left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4\ge 0 $$
To solve inequation (11), first, the solutions for the following equation are obtained:
$$ \left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4=0 $$
The solutions are
$$ \left\{\begin{array}{l}{\eta}_1=\frac{2\alpha \left({\tau}_0-2d+2{d}^2-d{\tau}_0\right)-4\sqrt{3d{\tau}_0-2d-{\tau}_0+2d{\alpha}^2-3{d}^2{\tau}_0+{d}^3{\tau}_0+{\alpha}^2{\tau}_0+2{d}^3-{d}^4-{\alpha}^2-2{d}^3{\alpha}^2+{d}^4{\alpha}^2-3d{\alpha}^2{\tau}_0+3{d}^2{\alpha}^2{\tau}_0-{d}^3{\alpha}^2{\tau}_0+1}}{\alpha^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4}\\ {}{\eta}_2=\frac{2\alpha \left({\tau}_0-2d+2{d}^2-d{\tau}_0\right)+4\sqrt{3d{\tau}_0-2d-{\tau}_0+2d{\alpha}^2-3{d}^2{\tau}_0+{d}^3{\tau}_0+{\alpha}^2{\tau}_0+2{d}^3-{d}^4-{\alpha}^2-2{d}^3{\alpha}^2+{d}^4{\alpha}^2-3d{\alpha}^2{\tau}_0+3{d}^2{\alpha}^2{\tau}_0-{d}^3{\alpha}^2{\tau}_0+1}}{\alpha^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4}\end{array}\right. $$
Second, the coefficient of η2 in Formula (11) is considered:
$$ A={\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4 $$
When A < 0, the solution space of inequation (11) is
$$ {S}_1=\left[\min \left({\eta}_1,{\eta}_2\right),\max \left({\eta}_1,{\eta}_2\right)\right] $$
When A > 0, the solution space of inequation (11) is
$$ {S}_1=\left[-\infty, \min \left({\eta}_1,{\eta}_2\right)\right]\cup \left[\max \left({\eta}_1,{\eta}_2\right),\infty \right] $$
To ensure that τR(t1) < d, inequation (16) holds.
$$ \frac{\pm \sqrt{\left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4}-2 d\eta \alpha +2d+d{\eta}^2-{\eta}^2{\tau}_0+{\eta}^2+{\eta \alpha \tau}_0-2}{-{\eta}^2+2\eta \alpha}<d $$
Similarly, the solutions of the following equation are obtained as Formula (18):
$$ \frac{\pm \sqrt{\left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4}-2 d\eta \alpha +2d+d{\eta}^2-{\eta}^2{\tau}_0+{\eta}^2+{\eta \alpha \tau}_0-2}{-{\eta}^2+2\eta \alpha}=d $$
$$ \left\{\begin{array}{l}{\eta}_3=\frac{2 d\alpha -\alpha -\sqrt{4{d}^2{\alpha}^2-4{d}^2-4d{\alpha}^2+4d+{\alpha}^2}}{2d-{\tau}_0+1}\\ {}{\eta}_4=\frac{2 d\alpha -\alpha +\sqrt{4{d}^2{\alpha}^2-4{d}^2-4d{\alpha}^2+4d+{\alpha}^2}}{2d-{\tau}_0+1}\end{array}\right. $$
When τ0 > 1 + d, the spoofing signal cannot enter the phase discriminator traction range, so τ0 ≤ 1 + d. Therefore, the coefficient of η2 in inequation (16) is expressed as follows:
$$ {A}^{\hbox{'}}=2d-{\tau}_0+1>0 $$
The solution space for inequation (16) is expressed as follows:
$$ {S}_2=\left[\min \left({\eta}_3,{\eta}_4\right),\max \left({\eta}_3,{\eta}_4\right)\right] $$
The spoofing-signal ratio η that leads to spoofing failure should be in S1 ∩ S2; the complementary set \( S=\overline{S_1\cap {S}_2} \) is the spoofing-signal ratio η range for successful spoofing. Based on Formulae (14), (15), and (20), when A = α2τ02 − 4α2τ0 + 4α2 + 4d2 − 4dτ0 + 4τ0 − 4 < 0,
$$ S=\overline{S_1\cap {S}_2}=\left(-\infty, \max \left(\min \left({\eta}_1,{\eta}_2\right),\min \left({\eta}_3,{\eta}_4\right)\right)\right]\cup \left[\min \left(\max \left({\eta}_1,{\eta}_2\right),\max \left({\eta}_3,{\eta}_4\right)\right),+\infty \right) $$
and when A = α2τ02 − 4α2τ0 + 4α2 + 4d2 − 4dτ0 + 4τ0 − 4 > 0,
$$ S=\overline{S_1\cap {S}_2}=\left(-\infty, \min \left({\eta}_3,{\eta}_4\right)\right]\cup \left[\min \left({\eta}_1,{\eta}_2\right),\max \left({\eta}_1,{\eta}_2\right)\right]\cup \left[\max \left({\eta}_3,{\eta}_4\right),+\infty \right) $$
Formulae (21) and (22) and η > 1 are combined to obtain the lower limit of the spoofing-signal ratio required for successful spoofing.
$$ \left\{\begin{array}{ll}\operatorname{inf}\left(\eta \right)=\min \left(\max \left({\eta}_1,{\eta}_2\right),\max \left({\eta}_3,{\eta}_4\right)\right), & \mathrm{if}\kern0.5em {\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4<0\\ {}\operatorname{inf}\left(\eta \right)=\max \left(\min \left({\eta}_1,{\eta}_2\right),\min \left({\eta}_3,{\eta}_4\right)\right), & \mathrm{if}\kern0.5em {\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4>0\end{array}\right. $$
When d = 0.5 and no carrier phase difference between authentic signals and spoofing signals is observed, i.e., α = cos(ϕ − ϕ') = cos 0 = 1, Formula (23) is simplified as follows:
$$ \operatorname{inf}\left\{\eta \right\}=\frac{1}{2-{\tau}_0} $$
This result matches the conclusion in Reference [15].
Phase discriminator spacing of target receiver < 0.5 chip
When the phase discriminator spacing of the target receiver is under 0.5 chip, the spoofing failure condition is τR(t1) < d and 1 − d < τS(t1) (Reference [15]). Similar to the deduction in Section 3.1, Formula (7) for E and L and the correlation function R(τ) for τR(t1) < d and 1 − d < τS(t1) (Formula (25)) are substituted into Formula (8):
$$ \left\{\begin{array}{ccc}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1+d+\tau \right),& -1-d\le \tau <d-1\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1-d+\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1+d+\tau \right),& d-1\le \tau <-d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1-d+\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1-d-\tau \right),& -d\le \tau \le d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1+d-\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=A\left(1-d-\tau \right),& d<\tau \le 1-d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=A\left(1+d-\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=0,& 1-d<\tau \le 1+d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)=0,& \mathrm{other}\end{array}\right. $$
The solution for τR(t1) is obtained in the same manner as in Section 3.1, and the resulting formula for the lower limit of the spoofing-signal ratio has the same form as Formula (23).
Similarly, when no carrier phase difference between an authentic signal and a spoofing signal is observed, i.e., α = cos(ϕ − ϕ') = cos 0 = 1, the formula for the lower limit of the spoofing-signal ratio is simplified as follows:
$$ \operatorname{inf}\left\{\eta \right\}=\frac{2d}{2d-{\tau}_0+1} $$
Phase discriminator spacing of target receiver > 0.5 chip
When the phase discriminator spacing of the target receiver exceeds 0.5 chip, the spoofing failure condition is τR(t1) < 1 − d and d < τS(t1) (Reference [15]). Similar to the deduction in Section 3.1, Formula (7) for E and L and the correlation function R(τ) for τR(t1) < 1 − d, d < τS(t1) (Formula (28)) are substituted into Formula (8):
$$ \left\{\begin{array}{ccc}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)={A}_{\mathrm{s}}\left(1+d+\tau \right),& -1-d\le \tau <-d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)={A}_{\mathrm{s}}\left(1-d-\tau \right),& -d\le \tau <d-1\\ {}{R}_{\mathrm{E}}\left(\tau \right)={A}_{\mathrm{s}}\left(1-d+\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)={A}_{\mathrm{s}}\left(1-d-\tau \right),& d-1\le \tau \le 1-d\\ {}{R}_{\mathrm{E}}\left(\tau \right)={A}_{\mathrm{s}}\left(1-d+\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=0,& 1-d<\tau \le d\\ {}{R}_{\mathrm{E}}\left(\tau \right)={A}_{\mathrm{s}}\left(1+d-\tau \right),& {R}_{\mathrm{L}}\left(\tau \right)=0,& d<\tau \le 1+d\\ {}{R}_{\mathrm{E}}\left(\tau \right)=0,& {R}_{\mathrm{L}}\left(\tau \right)=0,& \mathrm{other}\end{array}\right. $$
This finding is similar to the solution for τR(t1) in Section 3.1. However, the condition changes to τR(t1) < 1 − d and d < τS(t1). Therefore, the solutions for inequation (30) are required.
$$ \frac{\pm \sqrt{\left({\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4\right){\eta}^2+4d{\eta \alpha \tau}_0+8 d\eta \alpha -4{\eta \alpha \tau}_0-8\eta {d}^2\alpha +4{d}^2-8d+4}-2 d\eta \alpha +2d+d{\eta}^2-{\eta}^2{\tau}_0+{\eta}^2+{\eta \alpha \tau}_0-2}{-{\eta}^2+2\eta \alpha}<1-d $$
$$ \left\{\begin{array}{l}{\eta}_3=\frac{2-2d}{2-{\tau}_0}\\ {}{\eta}_4=\frac{2d-2}{2-{\tau}_0}\end{array}\right. $$
Because the phase discriminator spacing d > 1 is meaningless, d < 1 and τ0 ≤ 1 + d < 2. Therefore, η3 > η4 and Formula (23) is rearranged as follows:
$$ \left\{\begin{array}{ll}\operatorname{inf}\left(\eta \right)=\min \left(\max \left({\eta}_1,{\eta}_2\right),{\eta}_3\right), & \mathrm{if}\kern0.5em {\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4<0\\ {}\operatorname{inf}\left(\eta \right)=\max \left(\min \left({\eta}_1,{\eta}_2\right),{\eta}_4\right), & \mathrm{if}\kern0.5em {\alpha}^2{\tau_0}^2-4{\alpha}^2{\tau}_0+4{\alpha}^2+4{d}^2-4d{\tau}_0+4{\tau}_0-4>0\end{array}\right. $$
Similarly, when no carrier phase difference is observed between an authentic signal and a spoofing signal, i.e., α = cos(ϕ − ϕ') = cos 0 = 1, the formula for the lower limit of the spoofing-signal ratio is simplified as follows:
$$ \operatorname{inf}\left\{\eta \right\}=\frac{2-2d}{2-{\tau}_0} $$
This finding matches the conclusion in Reference [15].
Based on the initial conclusions in Reference [15], when the carrier phases of authentic and spoofing signals are misaligned, the lower limit of the spoofing-signal ratio required for successful spoofing is deduced. When the carrier phases of authentic and spoofing signals are aligned, the formula for the lower limit matches the conclusion in Reference [15]. In the next section, the validity of these conclusions is verified via testing.
Test verification
In this section, the formula for calculating the lower limit of the spoofing-signal ratio derived in the previous sections is verified by testing. GPS is the most widely used and mature navigation system, so GPS signals and a GPS receiver are selected as the test signal and device. The test includes two parts. The first part is a test using a GNSS signal generator, which is repeated 100 times; the lower limit of the spoofing-signal ratio in each test is recorded, and the highest of these 100 values is compared with the theoretical result. In the second part, a GPS receiver collects and stores 1 h of authentic signal, from which 100 starting points are randomly selected to generate 100 test segments of 1 ms each. The spoofing-signal ratio lower limits of these 100 test samples are obtained, and the highest value is compared with the theoretical result. The two parts of the test and the analysis of their results are detailed below.
Verification via a signal generated by a signal simulation source
The GNSS signal simulation source can accurately control the navigation signal SNR, the spoofing-signal ratio (the amplitude ratio of the spoofing signal to the authentic signal), the carrier phase difference, and the code phase difference between the two signals. Therefore, the signal simulation source is employed to generate the data required in the test. In each test, the simulation source generates two signals: an authentic signal and a spoofing signal. The two signals have identical frequencies. The carrier phase difference between the two signals is set to 0.1, 0.2, 0.3, 0.4, and 0.5 rad. The code phase difference between the two signals increases from 0 to 1 + d chips (d represents the receiver phase discriminator spacing) with a step size of 0.01 chip. To verify the results for different phase discriminator spacings, a Matlab software receiver is employed, with the phase discriminator spacing set to 0.3 (< 0.5), 0.5 (equal to 0.5), and 0.7 (above 0.5) chip. The test procedure is as follows (a sketch of the automated sweep is given after the list):
1. The simulation source generates an authentic signal; the software receiver outputs stable and accurate positioning results.
2. The power difference between the spoofing signal and the authentic signal, the carrier phase difference, and the code phase difference between the two signals are configured to generate the spoofing signal.
3. The code phase difference between the authentic signal and the spoofing signal is gradually adjusted to separate the two signals, and the spoofing signal is evaluated to determine whether it can seize control of the receiver. The criterion for successful control seizure is as follows: the distance between the code phase of the authentic signal and the code phase of the receiver's local replica exceeds 1 + d, and the distance between the code phase of the spoofing signal and the code phase of the receiver's local replica is less than 1 + d. If spoofing is successful, go to step 4; otherwise, increase the power of the spoofing signal and repeat step 3.
4. Record the power difference between the spoofing signal and the authentic signal. Increase the code phase difference between the two signals by 0.01 chip, reset the power difference to 0, and repeat step 2 until the code phase difference is equal to 1 + d; then go to step 5.
5. Increase the carrier phase difference between the two signals by 0.1 rad, reset the code phase difference to 0, and repeat step 2 until the carrier phase difference is equal to 0.6 rad; then go to step 6.
6. Repeat the above test 100 times. Record and examine the highest value of the power-difference lower limits in these 100 tests. Convert the spoofing-signal ratio calculated by Formulae (24) and (33) to a power difference and compare it with the recorded power difference to verify the validity of the formulae.
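A minimal Python sketch of how this sweep could be automated is shown below; spoof_succeeds is a hypothetical placeholder for a single spoofing trial against the software receiver (it is not part of the paper's tooling), and the 40 dB cut-off is an arbitrary give-up threshold.

```python
import numpy as np

def sweep_lower_limits(spoof_succeeds, d, code_step=0.01, power_step_db=0.1):
    """Sweep carrier phase and code phase differences (Section 4.1) and record
    the power-difference lower limit for each grid point.

    spoof_succeeds(dphi, tau0, power_diff_db) must return True when the spoofing
    signal seizes the code loop according to the criterion in step 3.
    """
    results = {}
    for dphi in np.arange(0.1, 0.6, 0.1):                            # carrier phase diff (rad)
        for tau0 in np.arange(0.0, 1.0 + d + code_step, code_step):  # code phase diff (chips)
            p = 0.0                                                  # power difference (dB)
            while not spoof_succeeds(dphi, tau0, p):
                p += power_step_db
                if p > 40.0:                                         # give up: spoofing unachievable
                    p = np.inf
                    break
            results[(round(dphi, 1), round(tau0, 2))] = p
    return results
```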
Verification via an authentic navigation signal
To prove the validity and practicality of Formulae (24) and (33), an authentic GPS navigation signal collected by the receiver is employed for verification. The sampling rate is set to 14 MHz, and the intermediate frequency is set to 3 MHz. A one-hour-long authentic navigation signal of a GPS satellite is collected and recorded, and 100 segments of 1 ms each are randomly selected from it. The data are processed with Matlab software to extract information including the carrier phase, code phase, navigation message, and Doppler shift. The carrier, pseudo code, and navigation message in the signal are separated, and the carrier phase and code phase are altered and recombined with noise to recreate the spoofing signal, for which the signal-to-noise ratio is set to 10 dB. Similar to the test procedure in Section 4.1, the carrier phase difference between the authentic signal and the spoofing signal is set to 0.1, 0.2, 0.3, 0.4, and 0.5 rad. The code phase difference between the two signals gradually increases from 0 to 1 + d chips (d represents the receiver phase discriminator spacing) with a step size of 0.01 chip. The phase discriminator spacing of the Matlab software receiver is set to 0.3 (< 0.5), 0.5 (= 0.5), and 0.7 chip (> 0.5). The minimum power required for successful spoofing is recorded, and the highest value of the power-difference lower limits in these 100 tests is compared with the results calculated by Formulae (24) and (33).
The result of the signal simulation source
The test results obtained by using the GNSS signal generator are shown in Figs. 4, 5 and 6.
Theoretical value versus simulation data when phase discriminator spacing = 0.5 chip. a Overall diagram. b Zoomed-in diagram
Theoretical value versus simulation data when phase discriminator spacing = 0.3 chip
Figures 4, 5 and 6 show the results of the theoretical value versus the simulation value when the phase discriminator spacing is 0.5 chip, 0.3 chip, and 0.7 chip. In the three zoomed-in diagrams, the error between the theoretical value and the simulation value is always under 0.1 dB in the three scenarios. The theoretical results are close to the simulation results, thereby demonstrating the validity of Formulae (24) and (33).
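The comparison in decibels uses the standard conversion from an amplitude ratio (the spoofing-signal ratio) to a power difference:

```python
import math

def eta_to_power_diff_db(eta):
    """Power difference (dB) corresponding to an amplitude ratio eta."""
    return 20.0 * math.log10(eta)
```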
The result of the authentic navigation signal
The test results obtained by using the authentic navigation signal are shown in Figs. 7, 8 and 9.
Theoretical value, simulation data, and authentic signal alignment results when phase discriminator spacing = 0.5 chip. a Overall diagram. b Zoomed-in diagram
Figures 7, 8 and 9 show the theoretical value, the simulation data, and the authentic signal alignment results when the discriminator spacing is equal to 0.5 chip, 0.3 chip, and 0.7 chip; these three figures show the difference between the theoretical values and the values measured with the real GPS signal. After noise is added, the difference between the theoretical value and the measured data increases, but the maximum values in these three cases do not exceed 2 dB and trend toward convergence. Using Formulae (24) and (33) to calculate the lower limit of the spoofing-signal ratio therefore provides strong guidance in actual attack scenarios. In addition, noise can change the lower limit of the spoofing-signal ratio, causing it not simply to increase in one direction but to fluctuate around the theoretical value. This is because noise affects not only the spoofing signal but also the authentic signal; under certain conditions, noise may make it easier for the tracking loop to capture the spoofing signal than the authentic signal.
In the theoretical calculations, the simulation results, and the measured results, the values of τ0 at which the lower limit of the spoofing-signal ratio changes from zero to non-zero are all the same. When the discriminator spacing is less than or equal to 0.5 chip, this critical point falls in the vicinity of 1 chip; when the discriminator spacing is greater than 0.5 chip, the critical point falls in the vicinity of twice the spacing, and the carrier phase difference between the two signals has no effect on the critical point. Therefore, provided that the spoofer estimates the position of the target sufficiently accurately and the code phases of the spoofing signal and the authentic signal are sufficiently close, the spoofer can successfully capture control of the target receiver when the power of the spoofing signal is only slightly greater than that of the authentic signal.
In these experiments, our discussion is limited to carrier phase differences less than or equal to 90°. This is because when the carrier phase difference between the spoofing and authentic signals is within the range (90°, 180°), the two signals no longer add constructively but instead weaken each other. In this scenario, the conclusions of Section 3 are no longer accurate, and the lower limit of the spoofing-signal ratio calculated from Formulae (24) and (33) is no longer applicable.
A receiver-spoofer is a highly covert and hazardous GNSS navigation spoofing attack method. In this paper, the influencing parameters of spoofing are analyzed, and the results indicate that the spoofing-signal ratio is a critical parameter in a spoofing attack. The lower limits of the spoofing-signal ratio required for successful spoofing under various receiver phase discriminator spacings, carrier phase differences, and code phase differences between authentic signals and spoofing signals are obtained via detailed deduction. Verification via a signal simulation source and an authentic navigation signal proves that the formula for the lower limit is accurate and valid. This finding provides a basis for the future study of anti-spoofing technologies.
Based on this study, parameters such as the frequency difference between authentic signals and spoofing signals and SNR will be investigated to identify a more effective method for spoofing-signal detection and elimination.
BDS: Beidou Navigation System
CSNC: China Satellite Navigation Conference
GNSS: Global navigation satellite system
GPS: Global positioning system
LSP: Leadership Scholarship Program
NCO: Numeric control oscillator
NRSCC: National Remote Sensing Center of China
SNR: Signal-to-noise ratio
[1] J.S. Warner, R.G. Johnston, A simple demonstration that the global positioning system (GPS) is vulnerable to spoofing. J. Secur. Adm. (2003)
[2] P. Papadimitratos, A. Jovanovic, Protection and fundamental vulnerability of GNSS (IWSSC, Toulouse, 2008)
[3] T.E. Humphreys, B.M. Ledvina, M.L. Psiaki, B.W. O'Hanlon, P.M. Kintner, Assessing the spoofing threat: development of a portable GPS civilian spoofer, in Proc. ION GNSS 2008 (Savannah, GA, 2008), pp. 2314–2325
[4] D. Shepard, T. Humphreys, Characterization of receiver response to a spoofing attack, in Proc. ION GNSS 2011 (Portland, OR, 2011), pp. 2608–2618
[5] T. Humphreys, P. Kintner Jr., M. Psiaki, B. Ledvina, B. O'Hanlon, Assessing the spoofing threat. GPS World 20(28) (2009)
[6] D.P. Shepard, T.E. Humphreys, A.A. Fansler, Going up against time: the power grid's vulnerability to GPS spoofing attacks. GPS World 22, 34–38 (2012)
[7] D. Shepard, J. Bhatti, T. Humphreys, Drone hack: spoofing attack demonstration on a civilian unmanned aerial vehicle. GPS World 23, 30–33 (2012)
[8] J.A. Bhatti, T.E. Humphreys, Covert control of surface vessels via unpredictable civil GPS signals, available online at http://radionavlab.ae.utexas.edu/publications/375-covert-control-of-surface-vessels-via-counterfeit-civil-gps-signals (2014)
[9] A. Jafarnia-Jahromi, A. Broumandan, J. Nielsen, G. Lachapelle, Pre-despreading authenticity verification for GPS L1 C/A signals. Navigation 61(1), 1–11 (2014)
[10] P. Levin, D. De Lorenzo, P. Enge, S. Lo, Authenticating a signal based on an unknown component thereof. U.S. Patent No. 7,969,354 B2 (2011)
[11] B.W. O'Hanlon, M.L. Psiaki, J.A. Bhatti, D.P. Shepard, T.E. Humphreys, Real-time GPS spoofing detection via correlation of encrypted signals. Navigation 60(4), 267–278 (2013)
[12] C. Tanil, S. Khanafseh, B. Pervan, GNSS spoofing attack detection using aircraft autopilot response to deceptive trajectory, in Proc. ION GNSS+ (Tampa, FL, 2015)
[13] M.L. Psiaki, S.P. Powell, B.W. O'Hanlon, GNSS spoofing detection using high-frequency antenna motion and carrier-phase data, in Proc. ION GNSS+ (Nashville, TN, 2013), pp. 2949–2991
[14] A. Alaqeeli, J. Starzyk, F. Van Graas, Real-time acquisition and tracking for GPS receivers, in Proceedings of the 2003 International Symposium on Circuits and Systems, Vol. 4 (2003)
[15] M. Zhou, Y. Liu, L. Xie, H. Li, M. Lu, P. Liu, Performance analysis of spoofing-signal ratio for receiver-spoofer, in Proceedings of the 2017 International Technical Meeting of The Institute of Navigation (Monterey, CA, 2017), pp. 898–911
The authors would like to thank the reviewers for the very helpful comments.
This work was supported by the National Natural Science Foundation of China (Grant No. 61571255).
The datasets supporting the conclusions of this article are private and were obtained from the Department of Electronic Engineering, Tsinghua University, Beijing, China.
Department of Electronic Engineering, Tsinghua University, Beijing, China
Meng Zhou, Hong Li & Mingquan Lu
Meng Zhou
Hong Li
Mingquan Lu
Dr. Zhou is the project leader of the ground simulation system for the Beidou Navigation System (BDS). She completed the derivation of the main formula and the writing of the main contents of the paper. Dr. Li performed the test verification in this paper. Prof. Lu contributed the ideas in Section 2. All authors read and approved the final manuscript.
Correspondence to Meng Zhou.
Meng Zhou was born in 1980. She joined the Beijing Satellite Navigation Center and has been an engineer there since December 2006. She is currently a PhD candidate in the Department of Electronic Engineering, Tsinghua University, Beijing, China, and has been the project leader of the ground simulation system for the Beidou Navigation System (BDS). Her research interests include the simulation of satellite navigation systems and anti-spoofing technology for satellite navigation systems.
Hong Li was born in 1981. He received the BS degree (with honors) from Sichuan University, Chengdu, China, in 2004, and the PhD degree (with honors) from Tsinghua University, Beijing, China, in 2009. Then, he joined the Department of Electronic Engineering of Tsinghua University and has been an associate professor since August 2014. He leads the research of GNSS security in the GNSS lab of the department, including spoofing, anti-spoofing, performance evaluation of signals, and the associated signal processing techniques. He has received the Academic Young Talent of Tsinghua University for young faculties, innovation foundation for Young Talents of National Remote Sensing Center of China (NRSCC), Leadership Scholarship Program (LSP) of Committee of 100, several excellent paper awards for young scholars of China Satellite Navigation Conference (CSNC), outstanding PhD graduate award, excellent doctoral dissertation award, and top 10 outstanding graduate students of Tsinghua University.
Mingquan Lu was born in 1965. He received the MS degree in Electronic Engineering from the University of Electronic Science and Technology of China, Chengdu, China, in 1993. He joined the Department of Electronic Engineering of Tsinghua University, Beijing, China, in 2003 and is currently a professor and the director of the Institute of Information System. His research interests include signal processing, simulation of satellite navigation system, local area navigation system, and software-defined receiver.
No conflict of interest exists in the submission of this manuscript, and the manuscript is approved by all authors for publication. I would like to declare on behalf of my co-authors that the work described was original research that has not been published previously, and not under consideration for publication elsewhere, in whole or in part. All the authors listed have approved the manuscript that is enclosed.
Zhou, M., Li, H. & Lu, M. Calculation of the lower limit of the spoofing-signal ratio for a GNSS receiver-spoofer. J Wireless Com Network 2018, 44 (2018). https://doi.org/10.1186/s13638-018-1048-y
Spoofing-signal ratio
Lower limit
Carrier phase discriminator
Signal simulation
Research | Open | Published: 24 June 2017
An isolated cellulolytic Escherichia coli from bovine rumen produces ethanol and hydrogen from corn straw
Jian Pang1,
Zhan-Ying Liu1,
Min Hao1,
Yong-Feng Zhang1,2 &
Qing-Sheng Qi3
Biotechnology for Biofuels, volume 10, Article number: 165 (2017)
Lignocellulosic biomass is the most abundant resource on earth. Lignocellulose is mainly composed of cellulose, hemicelluloses, and lignin, and the particular arrangement of these three constituents prevents effective degradation. The goal of this work was to investigate the potential of the bovine rumen as a source of novel cellulolytic bacteria, which may be used for chemical and biofuel production from lignocellulose.
A cellulolytic strain, ZH-4, was isolated from the Inner Mongolia bovine rumen. This strain was identified as Escherichia coli by morphological, physiological, and biochemical characteristics and 16S rDNA gene sequencing. The extracellular enzyme activity analysis showed that this strain produces extracellular cellulases with an exoglucanase activity of 9.13 IU, an endoglucanase activity of 5.31 IU, and a β-glucosidase activity of 7.27 IU at pH 6.8. The strain was found to produce 0.36 g/L ethanol and 4.71 mL/g hydrogen from corn straw, with a cellulose degradation ratio of 14.30% and a hemicellulose degradation ratio of 11.39%.
This is the first time that a cellulolytic E. coli has been isolated and characterized from the bovine rumen. This provides a great opportunity for researchers to investigate the evolutionary mechanisms of the microorganisms in the rumen and a great chance to produce biofuels and chemicals directly from engineered E. coli using a consolidated bioprocess.
Lignocellulosic biomass is the most abundant carbohydrate in nature and also one of the most important renewable resources [1]. Lignocellulose consists mainly of cellulose (35–50%), hemicellulose (25–30%), and lignin (25–30%) [2]. As the main component of lignocellulose, cellulose is made up of linear chains of 1,4-β-linked glucosyl residues and is enwrapped by hemicellulose and lignin, which prevents its effective degradation [3, 4].
Among cellulose degradation technologies, enzymatic hydrolysis is an environmentally friendly and effective technique. However, the lack of enzymes or microbes that can efficiently deconstruct plant polysaccharides represents a major bottleneck. Cellulolytic microorganisms inhabit a wide range of niches, such as soil, compost piles, decaying plant materials, rumens, sewage sludge, forest waste piles, and wood processing plants [5]. Therefore, many researchers worldwide have tried to isolate highly efficient cellulase-producing microorganisms from different niches [6, 7]. Although fungi and actinomyces play a major role in the degradation of plant materials in natural niches, cellulolytic bacteria have unique properties compared to fungi and actinomyces, which provides a variety of cellulase choices for the biofuel and biorefining industry [8, 9]. So far, different kinds of cellulolytic bacteria have been isolated from different environments, such as the anaerobic bacteria Acetivibrio, Bacteroides, Clostridium, and Ruminococcus and the aerobic bacteria Bacillus sp., Cellulomonas, Cellvibrio, Microbispora, Thermomonospora sp., Cellulomonas sp., Pseudomonas, Nocardiopsis sp., and Cellulomonas sp. [9]. Aerobic bacteria secrete individual, extracellular enzymes with binding modules for various cellulose conformations, and these enzymes act in synergy to degrade cellulose. In contrast, anaerobic bacteria produce a unique extracellular multi-enzyme complex, the cellulosome, to hydrolyze cellulose effectively [10]. The latter is thought to show higher degradation efficiency and conversion rates for lignocellulose [11]. The microorganisms in the rumen anaerobic ecosystem are highly diverse, and many enzymatic activities are involved, such as cellulase, xylanase, β-glucanase, pectinase, amylase, and protease activities [12,13,14]. However, few members of this complex community have been isolated and cultivated in vitro because of their sensitivity to oxygen and special nutritional requirements [15]. To avoid the difficulties in isolation and cultivation, culture-independent methods based on molecular biology have been employed, such as metagenomics, genomic libraries, full-cycle rRNA analysis, and fingerprinting [16]. Metagenomics applies high-throughput next-generation sequencing technologies to characterize microbial communities in the absence of culture [17]. Biomass-degrading genes and genomes have been investigated by metagenomics to elucidate the function and degradation mechanisms of these ruminal cellulolytic microorganisms [18]. Analyses of metagenomics and the 16S pyrotag library of rice straw-adapted microbial consortia showed that the phylum Actinobacteria was the main group, and approximately 46.1% of CAZyme genes were from actinomycetal communities [19]. A novel gene encoding a bifunctional xylanase/endoglucanase, RuCelA, was cloned from the metagenomic library of yak rumen microorganisms [20]. However, pure microorganisms cannot be obtained by metagenomics, and thus intensive studies of the evolutionary processes and characteristics of individual microorganisms are hindered. As a result, researchers have tried to isolate cellulolytic microorganisms directly from the rumen. For example, Enterobacter cloacae WPL 214 was isolated from bovine rumen fluid waste in Surabaya, Indonesia, and Sphingobacterium alimentarium was isolated from buffalo rumen fluid [21, 22].
A novel Clostridiaceae strain (AN-C16-KBRB) with bifunctional endo-/exo-type cellulase was isolated from the bovine rumen at Jeongeup in Korea, while Treponema JC4 was isolated from the bovine rumen in the semiarid tropics of Australia [23, 24].
In this study, an Escherichia coli strain with cellulose-degrading and hemicellulose-degrading activity was isolated from the Inner Mongolia bovine rumen for the first time. Three kinds of cellulase activities (exoglucanase, endoglucanase, and β-glucosidase) were detected in the fermentation supernatant in a neutral buffered solution, which indicates that this strain has special cellulose-degradation properties in comparison to normal E. coli. The production of ethanol and hydrogen from corn straw was also investigated.
Screening of cellulolytic bacteria from bovine rumen
The sample was taken from the bovine rumen and was enriched in a medium with Whatman filter paper as the sole carbon source. After 6 days, the filter paper was found to be degraded efficiently, indicating the presence and enrichment of cellulolytic microorganisms in the sample (Fig. 1a). The samples were then transferred three times to remove the non-cellulose-degrading microorganisms. The enriched samples were diluted and spread in strictly anaerobic Hungate roll tubes containing cellulose and Congo red. Anaerobic cellulolytic microorganisms grow under strictly anaerobic conditions and degrade cellulose to form a transparent zone. The cellulose-degrading strains MA-1, WH-2, PJ-3, ZH-4, and HM-5, which formed clear zones in the tubes, were picked (Fig. 1b). Strain ZH-4, which had the largest ratio of transparent-zone diameter to colony diameter (6.8 mm/mm), was further purified with Hungate roll tubes and was used for detailed characterization and study.
The result of screening (a) and verification (b)
Identification and characterization of isolated cellulose-degrading strain
The colonies of ZH-4 were round, smooth, moist, and milky, with regular edges. Gram staining showed that this strain is a Gram-negative bacterium with a size of 0.87 × 1.63 μm. A transmission electron micrograph is shown in Fig. 2.
Transmission electron micrograph of ZH-4
ZH-4 has an optimum growth temperature of 37 °C and an optimum pH of 7.0 (data not shown). The physiological and biochemical characteristics of ZH-4 were analyzed (Table 1), which indicated that this strain may belong to the enteric bacterium Escherichia coli. Since common E. coli does not degrade cellulose, to further confirm our result, strain ZH-4 was identified again with a Biolog MicroPlate reader (ICN Flow Titertek® Multiskan Plus, Version 2.03, LabSystems, Finland), which also showed the highest similarity to E. coli. We then extracted and sequenced the 16S rDNA (1444 bp, GenBank accession number KU058643) of strain ZH-4 and searched it with the BLAST program against the NCBI GenBank, which revealed 99% sequence identity with E. coli. The relationship between strain ZH-4 and the most closely related taxonomic species based on 16S rDNA sequences is described in the phylogenetic tree (Fig. 3). The genome sequencing results showed that ZH-4 belongs to E. coli; the genome size was about 5.28 Mb with a GC content of 50.69%, and the NR annotation also matched E. coli. Strain ZH-4 has been deposited in the China General Microbiological Culture Collection Center (CGMCC) under preservation number CGMCC No. 12427.
Table 1 Physiological and biochemical characteristics of ZH-4
Phylogenetic tree of 16S rDNA gene sequence. The evolutionary history was inferred using the neighbor-joining method. The bootstrap consensus tree inferred from 1000 replicates was taken to represent the evolutionary history of the taxa analyzed. Branches corresponding to partitions reproduced in less than 50% bootstrap replicates were collapsed. The percentage of replicate trees in which the associated taxa clustered together in the bootstrap test (1000 replicates) was shown next to the branches. The evolutionary distances were computed using the Maximum Composite Likelihood method and were in the units of the number of base substitutions per site. Evolutionary analyses were conducted using MEGA 7 software
All these experiments confirmed that strain ZH-4 is the enteric bacterium E. coli. On the other hand, our experimental results also clearly indicated that ZH-4 can degrade cellulose, while the typical E. coli strains MG1655 and W3110 cannot under the same conditions (data not shown). To further confirm this phenomenon, the enzymatic activities of the fermentation fluid with regard to cellulose degradation were analyzed.
Confirmation of the cellulolytic property by enzymatic assay
Since no cellulosome structure was found on the surface of the bacteria via TEM, the endoglucanase, exoglucanase, and β-glucosidase activities in the fermentation supernatant were analyzed under both aerobic and anaerobic conditions (Fig. 4). From the results, we found that strain ZH-4 possesses endoglucanase, exoglucanase, and β-glucosidase activities, which explains why this strain can degrade cellulose efficiently. The extracellular enzyme activities increased with cultivation time (Fig. 5). The highest enzyme activities, an exoglucanase activity of 8.54 IU, an endoglucanase activity of 4.92 IU, and a β-glucosidase activity of 6.98 IU, were found on the 7th day. Meanwhile, the enzyme activities were higher under anaerobic conditions than under aerobic conditions, which also fits the rumen environment. The highest enzyme activities appeared at the neutral condition of pH 6.8: an exoglucanase activity of 9.13 IU, an endoglucanase activity of 5.31 IU, and a β-glucosidase activity of 7.27 IU were found under anaerobic conditions at pH 6.8, while an exoglucanase activity of 5.35 IU, an endoglucanase activity of 0.32 IU, and a β-glucosidase activity of 2.15 IU were found under aerobic conditions. At pH 5.8, all enzyme activities were low, while at pH 7.8 the exoglucanase activity was still quite high, indicating that the exoglucanase is alkali-resistant. The enzyme activity assay demonstrated that ZH-4 does generate soluble cellulose-degrading enzymes; however, the enzyme generation and secretion mechanism should be investigated further.
Cellulase activity under anaerobic condition with different pH (a); cellulase activity under aerobic condition with different pH (b)
Cellulase activity with cultivation time
Production of ethanol and hydrogen from corn straw
To evaluate the fermentation performance and the potential of ZH-4 for producing biofuels and chemicals from biomass, crystalline cellulose Avicel was tried first. After cultivation optimization, the fermentation conditions were determined to be 37 °C and pH 6.8 under anaerobic conditions. Under these conditions the fermentation broth became muddy, its color darkened, and the amount of crystalline cellulose Avicel was gradually reduced; a possible explanation of this phenomenon is that the crystalline cellulose Avicel had been solubilized by strain ZH-4. The cellulose degradation ratio, ethanol titer, and hydrogen accumulation were analyzed (Fig. 6). The cellulose degradation ratio (utilization in absolute concentration) was 6.06% (0.30 g/L), 15.99% (0.79 g/L), 26.31% (1.30 g/L), and 29.10% (1.44 g/L), and the hydrogen production was 0.83, 1.43, 3.62, and 4.71 mL/g-Avicel, respectively, after 1, 3, 5, and 7 days of cultivation. The highest hydrogen accumulation was observed on the 7th day. However, ethanol was not detected. This result proved that strain ZH-4 can produce biofuels directly from cellulose.
Fermentation of ZH-4 with Avicel as substrate
To further evaluate the actual performance of the strain on biomass, corn straw, which consists of 36.3% glucan and 14.9% xylan, was used as the substrate to investigate the ethanol and hydrogen production potential (Fig. 7). The corn straw was found to be efficiently degraded in the presence of strain ZH-4 alone, although hemicellulase activity was not detected in the pre-experiments (data not shown). The cellulose degradation ratio (utilization in absolute concentration) was 4.16% (0.23 g/L), 5.60% (0.31 g/L), 12.15% (0.66 g/L), and 14.30% (0.78 g/L), and the hemicellulose degradation ratio was 2.41% (0.05 g/L), 3.85% (0.09 g/L), 8.16% (0.18 g/L), and 11.39% (0.26 g/L) after 1, 3, 5, and 7 days of fermentation. The fermentation broth became more yellow after 5 days of cultivation, which was probably related to the biological decomposition of the corn straw. According to the analysis of reducing sugars, the corn straw was continuously degraded throughout the 7-day period. The final ethanol titer was 0.36 g/L and the final hydrogen production was 3.31 mL/g-corn straw, with 15.32% of the cellulose and 11.39% of the hemicellulose degraded after 7 days of cultivation. As a new microbial resource, this strain could be optimized to produce more ethanol and H2 through metabolic engineering and process optimization.
Fermentation of ZH-4 with Corn straw as substrate
Many previously isolated cellulolytic bacteria from different habitats are strictly anaerobic, for example, Bacteroides, Clostridium, and Ruminococcus [9]. This is the first time that a cellulolytic E. coli has been found in the bovine rumen. Since E. coli had not previously been reported to have cellulose degradation capability [25], several purification steps were performed to ensure the purity of the isolates. The anaerobic cellulose-based screening also removed the non-cellulolytic bacteria and many possible contaminations. Meanwhile, the enzyme activity of the isolated ZH-4 was measured again to confirm our result. 16S rDNA and genome sequencing analysis not only confirmed the purity of the isolate but also revealed 99% sequence identity with E. coli.
In previous studies, a cellulase-producing Enterobacter cloacae WPL 214 was isolated from bovine rumen fluid waste; an endoglucanase activity of 0.009 U/mL, an exoglucanase activity of 0.13 U/mL, and a cellobiase activity of 0.10 U/mL were obtained at an optimum temperature of 35 °C and an optimum pH of 5 [21]. A cellulase-producing Sphingobacterium alimentarium isolated from buffalo rumen fluid had an extracellular cellulase activity of 1.363 IU/mL [22]. Besides, a microaerophilic Bacillus licheniformis with high cellulose-degrading activity was also isolated from the bovine rumen, indicating that microaerobic or facultative bacteria also exist in the bovine rumen. It had been reported that pure cultures of the rumen bacteria Bacteroides succinogenes A3c and S-85 and Ruminococcus albus 7 degraded hemicellulose from bromegrass, with hemicellulose degradation ratios of 54.0, 77.3, and 60.9%, respectively [26]. Rumen bacteria are known to produce ethanol and hydrogen by fermenting lignocellulosic materials under anaerobic conditions [27]. For Ruminobacter albus and Ruminococcus flavefaciens, hydrogen and ethanol are major end metabolites under anaerobic conditions [7]. The rumen anaerobic fungus Neocallimastix frontalis also produces ethanol and hydrogen as end products [28]. It is well known that wild-type Escherichia coli neither produces endogenous cellulose-degrading enzymes nor secretes heterologous cellulases, owing to its poor secretory capacity, and there is no report that wild-type E. coli expresses cellulase and possesses cellulose- and hemicellulose-degrading ability [29].
Escherichia coli is often used as a cell factory for the production of biofuels and chemicals on account of its many available expression and regulation tools [30]. Native E. coli has been made to produce biochemicals by expressing heterologous cellulases [29, 31]. The novel E. coli ZH-4, however, not only possesses a self-contained, complete cellulase system but also secretes these enzymes outside the cells. In addition, endoglucanase genes (GenBank accession number KY965823), β-glucosidase genes (GenBank accession number KY965824), and hydrogenase genes (GenBank accession number KY965825) were identified in the draft genome; that is, the genes responsible for the production of cellulases and hydrogen have been found. These new characteristics are presumed to have evolved in its special niche in the rumen. Hence, we speculate that the cellulase genes may come from horizontal gene transfer in the rumen ecological system. The cellulase genes in Enterococcus faecalis were thought to be derived from horizontal gene transfer [32]. The transfer of an endoglucanase gene (celA) from the rumen bacterium Fibrobacter succinogenes to the rumen fungus Orpinomyces joymii was also reported [33]. These examples indicate that the rumen, as a special environment, provides conditions for acquiring unique genes from other bacteria or has special mechanisms to regulate gene expression. However, we also cannot exclude the possibility that a latent cellulase gene in wild-type E. coli was activated for some reason. To demonstrate this, complete genome sequencing and regulation analysis of strain ZH-4 should be performed. Since E. coli is commonly used as a cell factory, this strain may be of great interest for the production of biofuels and chemicals from biomass using a consolidated process or a single-cell refinery [34].
In this study, a cellulose-degrading E. coli strain, ZH-4, was isolated from the rumen of an Inner Mongolia bovine and identified as a new isolate of Escherichia coli. The strain showed extracellular cellulase activity and can produce ethanol and hydrogen from cellulose and corn straw, indicating its great application potential. Further metabolic and regulatory investigation of the strain may reveal the secretion mechanisms of the enzymes in ZH-4.
Materials and Sample collection
All the chemicals used were of molecular biology or analytical grade. Corn straw (corn type: Pioneer) was collected at the mature period from the suburbs of Hohhot City, China. The whole corn straw was collected, cut into 2–3 cm lengths, and then crushed with a crusher and sieved through a 40-mesh sieve before use.
Rumen contents were collected from three Inner Mongolia bovines. Samples were collected and filtered through four layers of gauze into 150 mL sterile, N2-gassed glass containers with screw-top sealable metal lids, kept at 37 °C, and then brought to the laboratory for use.
Enrichment of cellulolytic bacteria
A 15% (v/v) inoculum of rumen samples was inoculated into enrichment medium A with microcrystalline cellulose as the carbon source. The serum bottles were then incubated at 37 °C and 180 rpm for 6 days. After the first enrichment, a 15% (v/v) inoculum of the culture was inoculated into enrichment medium A with Whatman filter paper as the carbon source, replacing microcrystalline cellulose. The cellulose-degrading strains were enriched by three successive 6-day subcultures. Enrichment medium A was as follows: basic medium (500 mL): NaHCO3 5.0 g, peptone 1.0 g, yeast extract 1.0 g; Mineral A (165 mL): KH2PO4 0.3 g, (NH4)2SO4 0.3 g, NaCl 0.6 g, CaCl2·2H2O 0.04 g, MgSO4·7H2O 0.058 g; Mineral B (165 mL): K2HPO4·3H2O 0.396 g; cell-free rumen fluid 170 mL; l-cysteine hydrochloride 1.5 g; microcrystalline cellulose 5.0 g. The cell-free rumen fluid was obtained by the following procedure: rumen fluid was taken from the rumen of an Inner Mongolia bovine and filtered through four layers of gauze; the filtrate was centrifuged at 5000 rpm for 15 min and then at 15,000 rpm for 30 min.
Screening of cellulolytic bacteria
The Congo red staining method was used for strain screening. Ten-fold dilutions of log-phase cells from three successive enrichment cultures were inoculated into screening Hungate roll tubes (medium B), which were then prepared by rapidly rotating the inoculated tubes in an ice bath. The Hungate roll tubes were incubated in a 37 °C incubator and observed regularly for colony characteristics and transparent zones. Colonies that formed transparent zones in the carboxymethylcellulose (CMC) agar were picked out. The purity of each strain was confirmed by observing colony and cell morphology, and single colonies were preserved. Impure colonies were purified by roll-tube isolation. Several representative cellulose-degrading bacteria were successfully isolated by this procedure. The pure isolate with the largest ratio of transparent-zone diameter to colony diameter was picked for further verification of cellulose degradation ability. Medium B was composed of basic medium, Mineral A, Mineral B, cell-free rumen fluid 170 mL, l-cysteine hydrochloride 1.5 g, CMC 5.0 g, agar 20.0 g, and Congo red dye 0.4 g. All media were dispensed under N2 gassing into serum bottles or Hungate roll tubes with aluminum crimps and flanged butyl stoppers and then autoclaved [35]. All operations were performed under anaerobic conditions. The isolated cellulolytic strains were stored at −70 °C.
Identification of strains
The characteristics of colonies grown on the solid medium were recorded, and cell morphology was observed with a microscope (biological microscope XS-212, China) and a transmission electron microscope (TEM H-700, Hitachi, Japan) at an accelerating voltage of 75 kV and a magnification of 20,000×. Physiological and biochemical parameters were determined with a microbial biochemical identification kit and compared according to Bergey's Manual of Systematic Bacteriology [36]. Gram staining was carried out according to the Manual of Methods for General Bacteriology [37].
The purified bacterium was identified using the Biolog GEN III MicroPlate™, which contains 71 carbon sources, 23 chemical sensitivity assays, and positive and negative controls. A single colony from an AN plate was transferred onto Biolog universal growth agar and incubated at 37 °C for 24 h. Colonies were picked with a sterile, moistened Biolog cotton swab and suspended in sterile inoculating fluid, and the concentration was adjusted to match the Biolog GEN III turbidity standard. The isolate was inoculated onto a separate Biolog GEN III MicroPlate (Biolog Inc., Hayward, CA). After incubation at 37 °C for 24 h, the isolate was read with a MicroStation 2 Reader. An isolate is considered identified if the similarity (SIM) is >0.5 after 24 h; the SIM value refers to the similarity of the experimental result to the corresponding data in the Biolog database.
16S rDNA sequencing, phylogenetic analysis, and genome sequencing
Molecular biology methods provide more accurate strain identification, and 16S rDNA gene sequencing is a widely used, effective, and simple technique. Genomic DNA was extracted from cultures in the exponential growth phase. 16S rDNA sequencing was carried out through DNA extraction, PCR amplification, cloning, and sequencing at Sangon Biotech (Shanghai, China) [38]. The 16S rDNA gene sequences were compared with others using the BLAST program. The phylogenetic tree was constructed with a neighbor-joining algorithm using MEGA version 7.0, and the bootstrap consensus tree inferred from 1000 replicates was taken to represent the evolutionary history of the taxa analyzed [39]. The draft genome was completed by Novogene (China).
Enzymatic assay
A 2% (v/v) seed culture was inoculated into medium D (peptone 10 g/L, yeast extract 10 g/L, NaCl 10 g/L, microcrystalline cellulose 5 g/L), with microcrystalline cellulose as the carbon source, and incubated at 37 °C for 7 days under anaerobic and aerobic conditions, respectively. The fermentation broth was centrifuged at 8000 rpm for 5 min at 4 °C, and the supernatant was used as the source of crude enzyme.
Three extracellular cellulase activities (exoglucanase, endoglucanase, and β-glucosidase) were measured by determining the amount of reducing sugar released from microcrystalline cellulose, CMC, and salicin, respectively, according to the method described by Wood and Bhat [40]. The cellulolytic enzyme activities were determined in buffer solutions of different pH (5.8, 6.8, 7.8) under anaerobic and aerobic conditions, respectively. All spectrophotometric data were measured using a UV–Vis spectrophotometer (UV759; China). One unit (IU) of enzymatic activity is defined as the amount of enzyme that releases 1 μmol of reducing sugars (measured as glucose) per mL of fermentation supernatant per minute.
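As a quick illustration of this unit definition, the sketch below (not part of the original protocol; all numerical values are hypothetical) converts a measured reducing-sugar concentration into IU as defined above.

```python
# Minimal sketch (assumed helper, not the authors' code): convert a reducing-sugar
# reading into the activity unit defined above (1 IU = 1 umol reducing sugar,
# measured as glucose, released per mL supernatant per minute).
GLUCOSE_MW = 180.16  # g/mol

def activity_iu(glucose_released_g_per_l, assay_volume_ml, supernatant_ml, incubation_min):
    glucose_g = glucose_released_g_per_l * assay_volume_ml / 1000.0   # g released in the assay
    glucose_umol = glucose_g / GLUCOSE_MW * 1e6                       # converted to umol
    return glucose_umol / (supernatant_ml * incubation_min)

# Hypothetical reading: 0.9 g/L glucose released in a 2 mL assay mixture containing
# 0.5 mL crude supernatant after a 30 min incubation -> about 0.67 IU.
print(round(activity_iu(0.9, 2.0, 0.5, 30.0), 2))
```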
Fermentation of ZH-4 using cellulose and hemicellulose
Thirty milliliters of medium A containing corn straw (15 g/L) or microcrystalline cellulose (5 g/L) as the carbon source were prepared in 100 mL serum bottles. A 5% (v/v) inoculum of the isolate was inoculated into these media under anaerobic conditions. The serum bottles were then incubated at 37 °C and 180 rpm for 1, 3, 5, and 7 days. The gas produced in the anaerobic serum bottles was collected in an airtight bag for GC–MS analysis. The degradation ratio of cellulose and hemicellulose was measured by quantitative saccharification as described by Shao et al. [41]. The fermentation broth was centrifuged at 8000 rpm for 10 min. The supernatant was acidified with 10% sulfuric acid solution and then filtered through a 0.22 μm filter membrane. One milliliter of 72% H2SO4 was added to 28 mL of supernatant, and the mixture was autoclaved at 121 °C for 60 min. After filtration, the concentrations of ethanol and sugars were obtained using a Waters HPLC system. Conversions were calculated as the percentage of the originally present glucan and xylan that was solubilized, based on the analysis of residual solids. Glucan solubilization and xylan solubilization were calculated according to Eq. 1 and Eq. 2. Avicel is composed of 99% glucan, so the Avicel degradation ratio was calculated from glucan solubilization according to Eq. 1.
$$\text{glucan solubilization} = \left(1 - \frac{\frac{C_{\text{glucose}} \times 162}{180} \times V_{2}}{C_{i} \times V_{1} \times a}\right) \times 100\% \quad (1)$$
C_glucose: the concentration of glucose in the hydrolysate of residual Avicel, g/L. C_i: the initial concentration of Avicel or corn straw, 5 g/L for Avicel or 15 g/L for corn straw. V_1: the broth volume after inoculation, L. V_2: the volume of the initial culture medium, L. a: the percentage of glucan in Avicel or corn straw, 99% in Avicel and 36.3% in corn straw.
The contents of 36.3% glucan and 14.9% xylan in the corn straw were determined according to the methods reported by the National Renewable Energy Laboratory (NREL) [42].
Corn straw degradation ratio was calculated by glucan solubilization (Eq. 1) and xylan solubilization (Eq. 2).
$$\text{xylan solubilization} = \left(1 - \frac{\frac{C_{\text{xylan}} \times 132}{150} \times V_{2}}{C_{i} \times V_{1} \times b}\right) \times 100\% \quad (2)$$
C_xylan: the concentration of xylan in the hydrolysate of residual corn straw. C_i: the initial concentration of corn straw, 15 g/L. V_1: the broth volume after inoculation, L. V_2: the volume of the initial culture medium, L. b: the percentage of xylan in corn straw, 14.9%.
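For clarity, Eq. 1 and Eq. 2 can be evaluated with a short helper. The sketch below is not from the original work; the sugar concentrations and volumes in the example are purely illustrative (30 mL medium plus a 5% inoculum, as in the cultivation described above).

```python
# Minimal sketch (assumed helper) of Eq. 1 and Eq. 2: fraction of the loaded
# glucan/xylan that was solubilized, based on the monomer sugar measured in the
# hydrolysate of the residual solids.

def solubilization(sugar_g_per_l, anhydro_factor, c_initial_g_per_l,
                   polymer_fraction, v1_broth_l, v2_medium_l):
    residual_polymer = sugar_g_per_l * anhydro_factor * v2_medium_l      # numerator of Eq. 1/2
    loaded_polymer = c_initial_g_per_l * v1_broth_l * polymer_fraction   # denominator of Eq. 1/2
    return (1.0 - residual_polymer / loaded_polymer) * 100.0

# Eq. 1, Avicel: a = 99% glucan, 5 g/L loading; 162/180 converts glucose to glucan units.
glucan = solubilization(3.8, 162 / 180, 5.0, 0.99, 0.0315, 0.030)
# Eq. 2, corn straw: b = 14.9% xylan, 15 g/L loading; 132/150 converts xylose to xylan units.
xylan = solubilization(2.1, 132 / 150, 15.0, 0.149, 0.0315, 0.030)
print(f"glucan solubilization: {glucan:.1f}%, xylan solubilization: {xylan:.1f}%")
```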
Analytical methods
The concentrations of ethanol and sugars in the supernatant were analyzed by HPLC with an Aminex HPX-87H column (300 mm × 7.8 mm i.d., 9 μm). Aqueous sulfuric acid solution (8 mM) was used as the mobile phase at a flow rate of 0.5 mL/min, and the column and refractive index detector temperatures were both 40 °C. Hydrogen accumulation was measured by GC–MS (model SP-6800A, column model TDX-1) using the method of Li et al. [43]; the column, heater, and detector temperatures were 160, 180, and 180 °C, respectively.
All experiments were conducted in triplicate and the data were presented as mean values ± standard deviation. An analysis of variance (ANOVA) for the obtained results was carried out using SAS 9.2 software (SAS INSTITUTE INC, North Carolina, USA).
CMC: carboxymethylcellulose
CGMCC: China General Microbiological Culture Collection Center
TEM: transmission electron microscope
HPLC: high-performance liquid chromatography
GC–MS: gas chromatography–mass spectrometry
ANOVA: analysis of variance
Feng Y, Yu Y, Wang X, Qu Y, Li D, He W, Kim BH. Degradation of raw corn stover powder (RCSP) by an enriched microbial consortium and its community structure. Bioresour Technol. 2011;102:742–7.
Wongwilaiwalin S, Rattanachomsri U, Laothanachareon T, Eurwilaichitr L, Igarashi Y, Champreda V. Analysis of a thermophilic lignocellulose degrading microbial consortium and multi-species lignocellulolytic enzyme system. Enzyme Microb Technol. 2010;47:283–90.
Kong Y, Xia Y, Seviour R, He M, McAllister T, Forster R. In situ identification of carboxymethyl cellulose-digesting bacteria in the rumen of cattle fed alfalfa or triticale. FEMS Microbiol Ecol. 2012;80:159–67.
Wang W, Yan L, Cui Z, Gao Y, Wang Y, Jing R. Characterization of a microbial consortium capable of degrading lignocellulose. Bioresour Technol. 2011;102:9321–4.
Amore A, Pepe O, Ventorino V, Aliberti A, Faraco V. Cellulolytic bacillus strains from natural habitats: a review. Chimica Oggi Chem Today. 2013;31:49–52.
Saini R, Saini JK, Adsul M, Patel AK, Mathur A, Tuli D, Singhania RR. Enhanced cellulase production by Penicillium oxalicum for bio-ethanol application. Bioresour Technol. 2015;188:240–6.
Lynd LR, Weimer PJ, van Zyl WH, Pretorius IS. Microbial cellulose utilization: fundamentals and biotechnology. Microbiol Mol Biol Rev. 2002;66:506–77.
Maki M, Leung KT, Qin W. The prospects of cellulase-producing bacteria for the bioconversion of lignocellulosic biomass. Int J Biol Sci. 2009;5:500.
Sadhu S, Maiti TK. Cellulase production by bacteria: a review. Br Microbiol Res J. 2013;3:235–58.
Schwarz WH. The cellulosome and cellulose degradation by anaerobic bacteria. Appl Microbiol Biotechnol. 2001;56:634–49.
Hu Z-H, Yue Z-B, Yu H-Q, Liu S-Y, Harada H, Li Y-Y. Mechanisms of microwave irradiation pretreatment for enhancing anaerobic digestion of cattail by rumen microorganisms. Appl Energy. 2012;93:229–36.
Carrillo-Reyes J, Barragán-Trinidad M, Buitrón G. Biological pretreatments of microalgal biomass for gaseous biofuel production and the potential use of rumen microorganisms: a review. Algal Res. 2016;18:341–51.
Pitta DW, Kumar S, Veiccharelli B, Parmar N, Reddy B, Joshi CG. Bacterial diversity associated with feeding dry forage at different dietary concentrations in the rumen contents of Mehsana buffalo (Bubalus bubalis) using 16S pyrotags. Anaerobe. 2014;25:31–41.
Pitta DW, Indugu N, Kumar S, Vecchiarelli B, Sinha R, Baker LD, Bhukya B, Ferguson JD. Metagenomic assessment of the functional potential of the rumen microbiome in Holstein dairy cows. Anaerobe. 2016;38:50–60.
Kenters N, Henderson G, Jeyanathan J, Kittelmann S, Janssen PH. Isolation of previously uncultured rumen bacteria by dilution to extinction using a new liquid culture medium. J Microbiol Methods. 2011;84:52–60.
Egert M, Friedrich MW. Formation of pseudo-terminal restriction fragments, a PCR-related bias affecting terminal restriction fragment length polymorphism analysis of microbial community structure. Appl Environ Microbiol. 2003;69:2555–62.
Hunter S, Corbett M, Denise H, Fraser M, Gonzalez-Beltran A, Hunter C, Jones P, Leinonen R, McAnulla C, Maguire E, et al. EBI metagenomics—a new resource for the analysis and archiving of metagenomic data. Nucleic Acids Res. 2014;42:D600–6.
Hess M, Sczyrba A, Egan R, Kim TW, Chokhawala H, Schroth G, Luo S, Clark DS, Chen F, Zhang T. Metagenomic discovery of biomass-degrading genes and genomes from cow rumen. Science. 2011;331:463–7.
Wang C, Dong D, Wang H, Muller K, Qin Y, Wang H, Wu W. Metagenomic analysis of microbial consortia enriched from compost: new insights into the role of actinobacteria in lignocellulose decomposition. Biotechnol Biofuels. 2016;9:22.
Chang L, Ding M, Bao L, Chen Y, Zhou J, Lu H. Characterization of a bifunctional xylanase/endoglucanase from yak rumen microorganisms. Appl Microbiol Biotechnol. 2011;90:1933–42.
Lokapirnasari WP, Nazar DS, Nurhajati T, Supranianondo K, Yulianto AB. Production and assay of cellulolytic enzyme activity of Enterobacter cloacae WPL 214 isolated from bovine rumen fluid waste of Surabaya abattoir, Indonesia. Vet World. 2015;8:367–71.
Patel K, Vaidya Y, Patel S, Joshi C, Kunjadia A. Isolation and characterization of cellulase producing bacteria from Rumen Fluid. Int J 2015;3:1103–1112.
Ko KC, Han Y, Choi JH, Kim GJ, Lee SG, Song JJ. A novel bifunctional endo-/exo-type cellulase from an anaerobic ruminal bacterium. Appl Microbiol Biotechnol. 2011;89:1453–62.
Rosewarne CP, Cheung JL, Smith WJM, Evans PN, Tomkins NW, Denman SE, Cuív PÓ, Morrison M. Draft genome sequence of Treponema sp. strain JC4, a novel spirochete isolated from the bovine rumen. J Bacteriol. 2012;194:4130.
Gao D, Luan Y, Liang Q, Qi Q. Exploring the N‐terminal role of a heterologous protein in secreting out of Escherichia coli. Biotechnol Bioeng. 2016;113(12):2561–7.
Coen JA, Dehority BA. Degradation and utilization of hemicellulose from intact forages by pure cultures of rumen bacteria. Appl Microbiol. 1970;20:362–8.
Flint HJ, Bayer EA, Rincon MT, Lamed R, White BA. Polysaccharide utilization by gut bacteria: potential for new insights from genomic analysis. Nat Rev Microbiol. 2008;6:121–31.
Teunissen MJ. Anaerobic fungi and their cellulolytic and xylanolytic enzymes. Antonie Van Leeuwenhoek. 1993;63:63–76.
Gao D, Luan Y, Wang Q, Liang Q, Qi Q. Construction of cellulose-utilizing Escherichia coli based on a secretable cellulase. Microb Cell Fact. 2015;14:159.
Atsumi S, Liao JC. Metabolic engineering for advanced biofuels production from Escherichia coli. Curr Opin Biotechnol. 2008;19:414–9.
Bokinsky G, Peralta-Yahya PP, George A, Holmes BM, Steen EJ, Dietrich J, Lee TS, Tullman-Ercek D, Voigt CA, Simmons BA. Synthesis of three advanced biofuels from ionic liquid-pretreated switchgrass using engineered Escherichia coli. Proc Natl Acad Sci USA. 2011;108:19949–54.
Bruijn FJD. Suppressive subtractive hybridization reveals extensive horizontal transfer in the rumen metagenome. Wiley; 2011.
Garcia-Vallvé S, Romeu A, Palau J. Horizontal gene transfer of glycosyl hydrolases of the rumen fungi. Mol Biol Evol. 2000;17:352–61.
Liang Q, Qi Q. From a co-production design to an integrated single-cell biorefinery. Biotechnol Adv. 2014;32:1328–35.
Weimer PJ, Stevenson DM. Isolation, characterization, and quantification of Clostridium kluyveri from the bovine rumen. Appl Microbiol Biotechnol. 2012;94:461–6.
Vos P, Garrity G, Jones D, Krieg NR, Ludwig W, Rainey FA, Schleifer K-H, Whitman W: Bergey's Manual of Systematic Bacteriology: vol 3: The Firmicutes. Springer Science & Business Media; 2011.
Doetsch RN. Determinative methods of light microscopy. In: Gerhardt P, Murray RGE, Costilow RN, Nester EW, Wood WA, Krieg NR, editors. Manual of methods for general Bacteriology. Washington: American Society for Microbiology; 1981. p. 21–33.
Yang Y, Jia J, Xing J, Chen J, Lu S. Isolation and characteristics analysis of a novel high bacterial cellulose producing strain Gluconacetobacter intermedius CIs26. Carbohydr Polym. 2013;92:2012–7.
Tamura K, Dudley J, Nei M, Kumar S. MEGA4: molecular evolutionary genetics analysis (MEGA) software version 4.0. Mol Biol Evol. 2007;24:1596–9.
Wood TM, Bhat KM. Methods for measuring cellulase activities. Methods Enzymol. 1988;160:87–112.
Shao X, Jin M, Guseva A, Liu C, Balan V, Hogsett D, Dale BE, Lynd L. Conversion for avicel and AFEX pretreated corn stover by Clostridium thermocellum and simultaneous saccharification and fermentation: insights into microbial conversion of pretreated cellulosic biomass. Bioresour Technol. 2011;102:8040–5.
Sluiter A, Hames B, Ruiz R, Scarlata C: Determination of sugars, byproducts, and degradation products in liquid fraction process samples. 2006.
Li Q, Liu C-Z. Co-culture of Clostridium thermocellum and Clostridium thermosaccharolyticum for enhancing hydrogen production via thermophilic fermentation of cornstalk waste. Int J Hydrog Energy. 2012;37:10648–54.
JP did the experimental work and wrote the draft manuscript. ZYL took part in planning and designing the study and wrote the draft manuscript. MH carried out the isolation studies. YFZ took part in checking the results. QSQ participated in the design of the study and commented on the manuscript. All authors read and approved the final manuscript.
All authors approved the manuscript.
This research was supported by grants from the National Natural Science Foundation of China (NSFC) (Grant No. 61361016); Program for Young Talents of Science and Technology in the Universities of the Inner Mongolia Autonomous Region; West Light Foundation of The Chinese Academy of Sciences talent cultivation plan; Research Fund for the Doctoral Program of Higher Education of China (RFDP) (20131514120003); Foundation of Talent Development of Inner Mongolia and The "Prairie talent" project of Inner Mongolia (CYYC20130034).
School of Chemical Engineering, Inner Mongolia University of Technology, Hohhot, 010051, Inner Mongolia, China
Jian Pang, Zhan-Ying Liu, Min Hao & Yong-Feng Zhang
Institute of Coal Conversion & Cyclic Economy, Inner Mongolia University of Technology, Hohhot, 010051, Inner Mongolia, China
Yong-Feng Zhang
State Key Laboratory of Microbial Technology, Shandong University, Jinan, 250100, China
Qing-Sheng Qi
Correspondence to Zhan-Ying Liu.
Cellulolytic Escherichia coli
Effects of limonene, n-decane and n-decanol on growth and membrane fatty acid composition of the microalga Botryococcus braunii
Eric Concha ORCID: orcid.org/0000-0003-4204-34581,
Hermann J. Heipieper2,
Lukas Y. Wick3,
Gustavo A. Ciudad4 &
Rodrigo Navia5
AMB Express volume 8, Article number: 189 (2018) Cite this article
Botryococcus braunii is a promising microalga for the production of biofuels and other chemicals because of its high content of internal lipids and external hydrocarbons. However, due to the very thick cell wall of B. braunii, traditional chemical/physical downstream processing is often not as effective as expected and requires large amounts of energy. In such cases, the application of two-phase aqueous-organic solvent systems could be an alternative for cultivating microalgae while simultaneously extracting the valuable compounds without significant negative effects on cell growth. Two-phase systems have been applied before; however, there are no studies so far on the mechanisms used by microalgae to survive in contact with solvents present as a second phase. In this study, the effects of the solvents limonene, n-decane and n-decanol on the growth of the microalga B. braunii, as well as the adaptive cell response in terms of the phospholipid fatty acid content, were analyzed. A concentration-dependent negative effect of all three solvents on cell growth was observed. These effects were accompanied by changes in the membrane fatty acid composition of the alga, manifested as a decrease of the unsaturation index. In addition, an association was found between the solvent hydrophobicity (given as log octanol–water partition coefficient (\(\text {P}_{O-W}\)) values) and their toxic effects, whereby n-decanol and n-decane emerged as the most and least toxic solvents, respectively. Among the tested solvents, the latter promises to be the most suitable for a two-phase extraction system.
Botryococcus braunii is a microalga that can be found in fresh, brackish, and saline water all around the world (Aaronson et al. 1983). This microalga is considered a source of lipids and hydrocarbons and can thus possibly serve as a base for renewable fuel production (Ashokkumar and Rengasamy 2012). It is known that lipid productivity in B. braunii is higher when it is cultivated under nitrogen depletion or other stress conditions (Cheng et al. 2013). However, the total amount of lipids available for biotechnological applications depends on the biomass productivity, which is normally reduced under stress conditions. Unlike lipids, hydrocarbon production is proportional to cell growth; accordingly, more hydrocarbons are obtained when more biomass is produced (Kojima and Zhang 1999).
For extracting biotechnologically valuable products from microorganisms, generally two different methods are used: (i) intensive extraction from harvested biomass (Cooney et al. 2009; Kumar et al. 2015) and (ii) continuous extraction in a two-phase aqueous-organic solvent system during ongoing microbial growth (Kleinegris et al. 2011). This second approach has been used to extract valuable compounds such as carotenoids, lipids, and hydrocarbons from microalgae, whereby cell growth is, at least partially, maintained (Hejazi and Wijffels 2004; Sim et al. 2001; Zhang et al. 2011a).
Two-phase systems may also be advantageous for the extraction of hydrocarbons from B. braunii in a biofuel production context as: (i) most hydrocarbons of B. braunii are located outside of the cell wall (approx. 95% according to Largeau et al. (1980)) and are therefore more easily extractable than internal lipids; (ii) two-phase systems potentially allow for both the ongoing cultivation of cells and the harvest of external hydrocarbons, which move from the aqueous to the solvent phase.
Maintaining microbial growth in a two-phase system depends on the tolerance and adaptive properties of the microorganisms toward the conditions and solvents applied. Responses of bacteria in contact with solvents have been widely studied (Manefield et al. 2017), and the data show that solvent effects on the cell membrane include alterations in the order, packing, and interaction of lipids with lipids and of lipids with proteins, or impairments of membrane functions such as selective permeability and enzymatic activity (Isken and de Bont 1998; Mattos 2001; Weber and de Bont 1996). Adaptive bacterial responses to counteract solvent effects include alterations of the content of their membrane phospholipid fatty acids, morphological changes, active transport of solvents out of the cell membrane, and modification of surface charge and hydrophobicity (Guan et al. 2017; Heipieper et al. 2007; Kusumawardhani et al. 2018; Segura et al. 2012).
A convenient proxy for the adaptation of the membrane of eukaryotic cells (including fungi and algae) is the fatty acid unsaturation index (UI) (Heipieper et al. 2000). This index is the average number of double bonds per lipid unit in the sample; in this experiment, the UI describes the unsaturation level of the membrane fatty acids. Therefore, a decrease in the UI reflects a decrease in membrane fluidity and an increase in the rigidity of the cell membrane (Weber and de Bont 1996), as a response, for instance, to membrane-fluidizing solvents.
Previous studies examining the effect of stress on the fatty acid profile of microalgae include the effects of NaCl, irradiation, \({\text {CO}_{2}}\), temperature and heavy metals (Chen et al. 2008; Dawaliby et al. 2016; Kalacheva et al. 2002; McLarnon-Riches et al. 1998; Rao et al. 2007; Sushchik et al. 2003; Vazquez and Arredondo 1991; Yoshimura et al. 2013; Zhila et al. 2011). However, to our knowledge, no study so far has addressed the effect of solvent stress on the UI of microalgal membrane fatty acids.
Depending on the goals, solvent selection should balance different solvent characteristics (Daugulis 1988). In this study, hydrocarbon extraction capability and biocompatibility are necessary on the one hand, while a sustainable solvent production and an easy solvent–hydrocarbon separation (low energy cost) are desirable from a renewable fuel production perspective on the other. These last two conditions are reasons to consider limonene and decane as candidates: the former is a non-petroleum-derived, renewable solvent (Njoroge et al. 2004), whereas the latter is one of the lowest-molecular-weight highly biocompatible alkanes (León 2003). Decanol, the alcohol derived from decane, is less hydrophobic and therefore more water-soluble, which could provide higher hydrocarbon extraction but lower biocompatibility. In this study, only biocompatibility is tested.
Limonene has been used before to extract hydrophobic compounds such as oils and carotenoids from diverse types of matrices with good results (Chemat-Djenni et al. 2010; Mamidipally and Liu 2004; Tanzi et al. 2012; Virot et al. 2008a, b). Oil extraction yields, based on dry weight, have ranged between 13.1% (Chemat-Djenni et al. 2010) and 48.6% (Virot et al. 2008b), although these values depend on the oil content of the respective matrices. Limonene used to extract lipids from the microalga Chlorella vulgaris recovered 38.4% of the total lipids (Tanzi et al. 2012). Nevertheless, to our knowledge, limonene has never been used as the solvent in a two-phase system to extract hydrophobic compounds.
Decane has been used as a solvent to extract hydrophobic compounds in two-phase systems with living microalgae. Results have varied in biocompatibility and extraction capacity, ranging from high (León 2003; León et al. 2001; Zhang et al. 2011b) to low (Hejazi et al. 2002; León et al. 2001) biocompatibility and from acceptable (León 2003; Zhang et al. 2011b) to poor (Mojaat et al. 2008) compound extraction capability. These results, however, depend on the extraction system conditions and the microalga species and should therefore be taken with caution.
In this study, we tested the effects of both mineral solvents, n-decane and its derived alcohol n-decanol, as well as the effects of the renewable solvent limonene, on the growth and membrane fatty acid profile of the microalga B. braunii in a two-phase aqueous-organic solvent system.
Preculture conditions
A 6 L preculture was established to supply biomass in the exponential growth phase for the two-phase cultures. The microalga strain used in this experiment was Botryococcus braunii race A (UTEX LB572), provided by the Universidad de Antofagasta, Chile. The preculture was carried out in a 10 L glass bottle (Cat. No. 11 602 00, Duran Group) using the medium described by Bazaes et al. (2012), but replacing \(\text {HPO}_3\) with \(\text {NaH}_2\text {PO}_4\). The pH was set at 6.5 using HCl (5 M) and the medium was autoclaved at 121 °C for 21 min. The microalga grew under continuous (24:0 h light:dark cycle) cool fluorescent illumination at ca. \(1500\text { lx}\) and 25 ± 1 °C, with neither aeration nor a \({\text {CO}_{2}}\) source. To prevent precipitation of the microalga, the flasks were shaken manually twice a day.
Two-phase cultures
The experiment was set up as a two-factor factorial design, with solvent and solvent concentration as factors. When the biomass in the preculture reached the exponential growth phase, 48 parts of the culture were taken (100 mL volume each) and limonene, n-decane, or n-decanol was added in the amount necessary to obtain the following solvent concentrations (mM): (1) limonene: 123.3, 12.3, 1.2, 0.6, 0.3; (2) n-decane: 513.0, 282.2, 51.3, 28.2, 5.1; (3) n-decanol: 157.3, 15.7, 1.6, 0.8, 0.4. Concentrations were determined based upon the literature (Frenz et al. 1989b; Liu and Mamidipally 2005; Mojaat et al. 2008; Zhang et al. 2011b) and toxicity assays (OECD 1984). According to the authors' reports and pilot studies, this range of concentrations produces rapid cell death (at higher rates among the higher concentrations) but also leaves fully functional cells in which changes in membrane fatty acid composition can be observed. Three replicates, placed in 240 mL flasks with rubber caps, were used for each treatment, totalling 48 runs including three control samples (cultures without solvents). After 24 h, two aliquots were taken from every flask, one to measure biomass concentration changes (growth) and the other to determine the membrane fatty acid profile.
All conditions for two-phase cultures were the same as in preculture, including culture media and continuous illumination.
Cell growth measurement
Cell growth in the cultures and preculture was determined using a Coulter counter (Isoton II solution as diluent, 100 μm electrode, 1:500 dilution) (Neumann et al. 2005a; Nguyen et al. 2013; Ríos et al. 2012). Samples were taken in the morning, and after that the two-phase cultures were shaken manually twice a day (12.00 and 20.00 h), to prevent solvent droplets in the samples from distorting the measured microalga cell concentrations.
Solvent concentration in cell membrane
According to Sikkema et al. (1994), there is a direct correlation between the hydrophobicity of a solvent, given as its log P value, and its partitioning into biological membranes. The following empirical relation was estimated: \(log(P_{M{-}W})=0.97*log(P_{O{-}W})-0.64\), where \(P_{M{-}W}\) and \(P_{O{-}W}\) are the membrane/water and octanol/water partition coefficients, respectively. This equation allows the solvent concentration in the membrane to be calculated for a resting system, which is helpful for the interpretation of the results (Neumann et al. 2005b).
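A small numerical sketch of this estimate is given below. The log \(\text {P}_{O{-}W}\) values are those quoted in this paper, while the aqueous saturation concentrations are approximate literature values assumed here for illustration (Table 1 itself is not reproduced in this text).

```python
# Minimal sketch of the resting-system estimate: log(P_MW) = 0.97*log(P_OW) - 0.64,
# and the membrane concentration is the aqueous concentration multiplied by P_MW.

def membrane_concentration_mM(log_p_ow, aqueous_mM):
    log_p_mw = 0.97 * log_p_ow - 0.64        # membrane/water partition coefficient (log10)
    return (10 ** log_p_mw) * aqueous_mM     # concentration reached in the membrane

# Approximate aqueous saturation concentrations (assumed, for illustration only).
for solvent, log_p_ow, aqueous_mM in [("n-decanol", 3.97, 0.23),
                                      ("limonene",  4.23, 0.10),
                                      ("n-decane",  5.01, 0.00037)]:
    print(f"{solvent}: ~{membrane_concentration_mM(log_p_ow, aqueous_mM):.0f} mM in the membrane")
```

With these illustrative inputs the estimate reproduces the order of magnitude discussed in the Results: hundreds of mM for n-decanol and limonene, but only a few mM for n-decane.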
Characterization of membrane fatty acid profile
Membrane fatty acid profile was characterized for the control samples and biomass in contact with solvents 24 h after the first solvent-microalga contact. Membrane lipids were extracted according to Bligh and Dyer (1959) and transformed into fatty acid methyl ester (FAME) as described by Morrison and Smith (1964). FAME identification was performed using a GC-FID Agilent 6890N, equipped with a capillary chromatographic column (CP-Sil 88 capillary column, Chrompack, ID: 0.25 mm, longitude: 50 m, film: 0.2 μm). A proxy for the relevant presence of double bonds in the membrane fatty acid profile was calculated as follows:
$$\text{UI} = \frac{(\%\text{C}16{:}1 + \%\text{C}18{:}1) + 2\,(\%\text{C}18{:}2) + 3\,(\%\text{C}18{:}3)}{100},$$
where UI is the unsaturation index (Heipieper et al. 2000; Kaszycki et al. 2013).
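As a concrete illustration, the UI defined above can be computed from a FAME profile expressed as percentages of total fatty acids; the helper and the example profile below are hypothetical and are not measured data.

```python
# Minimal sketch (assumed helper) of the unsaturation index defined above.
def unsaturation_index(profile):
    """UI = [%C16:1 + %C18:1 + 2*(%C18:2) + 3*(%C18:3)] / 100."""
    return (profile.get("C16:1", 0.0) + profile.get("C18:1", 0.0)
            + 2.0 * profile.get("C18:2", 0.0)
            + 3.0 * profile.get("C18:3", 0.0)) / 100.0

# Illustrative profile (percentages of total fatty acids); saturated acids such as
# C16:0 do not enter the numerator, and the remainder corresponds to other fatty acids.
example = {"C16:0": 26.0, "C16:1": 5.0, "C18:1": 28.0, "C18:2": 12.0, "C18:3": 15.0}
print(round(unsaturation_index(example), 2))   # -> 1.02
```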
The experiment was set as a two-factor factorial design, with solvent and solvent concentration as factors. All experiments were carried out in triplicate. The obtained data were analyzed using analysis of variance (ANOVA) to detect significant differences between solvent or solvent concentration effects. The probability of \(\alpha\) (type I error) was set at 5%. All data processing and plots were made using the statistical computing software R (version 3.3.3) (R Core Team 2017).
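The two-factor analysis described here can be sketched as follows; the authors used R (version 3.3.3), so this Python/statsmodels version, including the file name "growth.csv" and its column names, is only an assumed illustration of the same kind of test.

```python
# Minimal sketch of a two-factor ANOVA (solvent x concentration) on the growth data.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

df = pd.read_csv("growth.csv")                    # assumed columns: solvent, concentration, growth
model = ols("growth ~ C(solvent) * C(concentration)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))            # effects are significant when p < 0.05 (alpha = 5%)
```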
Cell growth
The effect of three solvents with different log \(\text {P}_{O{-}W}\) on B. braunii growth was measured (Fig. 1). n-Decanol (log \(\text {P}_{O{-}W}\) = 3.97) was found to be the most toxic solvent tested, resulting in a lower cell concentration compared to n-decane and limonene at quasi-identical solvent concentrations (p-value < 0.01). In the case of limonene (log \(\text {P}_{O{-}W}\) = 4.23), cultures with concentrations lower than 1.2 mM showed higher growth than the control samples (p-value = 0.03), i.e., values greater than 100% as illustrated in Fig. 1. Cultures using n-decane as the second phase (log \(\text {P}_{O{-}W}=5.01\)) grew similarly to the control samples up to a solvent concentration of 51.3 mM (p-value = 0.89) and then slowly decreased to 70% of control sample growth. As expected, the general trend for all the solvents was a lower relative growth when the solvent concentration increased and when log \(\text {P}_{O{-}W}\) decreased (see Fig. 1).
Effect of different concentrations of limonene (square), n-decanol (circle) and n-decane (triangle) on Botryococcus braunii UTEX LB572 growth, after 24 h solvent-biomass contact. Growth is expressed as a percentage of control samples. Every point is the average of three independent samples. Error bars represent standard error of the mean of the same three samples
Characterization of fatty acid profile from cells in contact with solvents
The fatty acid profile of cells from the control samples revealed that the cell membranes of B. braunii contain mainly oleic acid (C18:1cis9, 28.0%) and palmitic acid (C16:0, 25.9%). The main effects of the solvents on the membrane fatty acid profile were observed for C16:0 and C18:1, and to a minor degree for C16:1; C18:2 and C18:3 showed no significant changes (Fig. 2). On the one hand, cells in contact with n-decanol and n-decane synthesized on average higher amounts of C16:0 (p-value < 0.01), accompanied by a decrease in the content of C18:1, especially with decane (p-value < 0.01). On the other hand, limonene showed a monotonically increasing trend for C16:0 and the opposite for C18:1 with increasing solvent concentrations. These changes in the fatty acid profile were reflected in different UIs, which differed in response to both the solvent type and the solvent concentration, indicating a shift toward saturated fatty acids and fatty acids with a single double bond.
Effect of n-decanol, n-decane and limonene on the membrane fatty acid profile of Botryococcus braunii UTEX LB572 after 24 h biomass-solvent contact for 5 different concentrations (1 to 5) and control samples (0). Number 1 corresponds to the lowest concentration for every solvent, while number 5 corresponds to the highest. All data represent the average and standard error of the mean of three independent samples
Addition of n-decane resulted in a decreased UI that remained at \(UI\approx 0.82\), regardless of the solvent concentration. The presence of limonene and n-decanol at the three lowest concentration levels likewise lowered the UI, to the range \(UI\approx\) 0.90–0.96. For these solvents, the two highest concentrations did not result in important changes in the UI compared to the control samples (Fig. 3).
Effect of solvents on the membrane fatty acid UI of B. braunii UTEX LB572, after 24 h solvent-biomass contact. The dashed horizontal line shows the control sample UI in every panel. Every point is the average of three independent samples. Bars show the standard error of the mean
The aim of this study was to test the effects of n-decane, n-decanol, and limonene on growth and membrane fatty acid composition, in particular the UI of B. braunii UTEX LB572 cells. Addition of solvents to B. braunii led to differing extents of growth inhibition. At quasi equimolar concentration n-decanol was found to be the most toxic solvent followed by limonene and n-decane, which showed the least inhibitory effects (Fig. 1). Such data provide valuable information for a better evaluation of the relative physiological status of the cells and associated changes of their fatty acid profile and UIs as will be discussed below.
In 1994, Sikkema et al. (1994) hypothesized that solvent toxicity is primarily governed by the amount of solvent dissolved into the membrane rather than its chemical structure. Thus, the accumulation of molecules in the cell membrane of microorganisms would be the cause of negative effects on bilayer stability, packing of acyl chains and ion leakage problems, resulting in stress, arrest of growth, or even cell death in the extreme case (Weber and de Bont 1996). This hypothesis was supported by results of Heipieper et al. (1995), who, working with different types of solvents on Pseudomonas putida S12, found that the concentration in membrane that produces a 50% loss in growth is similar for all of them: between 60 and 200 mM (solvents used were: methanol, ethanol, 1-butanol, phenol, 1-hexanol, p-cresol, 4-chlorophenol, toluene, 1-octanol, and 2,4-dichlorophenol).
In this study, the membrane solvent concentration was calculated at the maximum water solubility of each solvent, according to the work of Sikkema et al. (1994) and Neumann et al. (2005b). The results in Table 1 show that, in a resting system, n-decane reaches a maximum membrane concentration (MMC) of around 6 mM, a low value compared with the 60–200 mM range. In contrast, the MMCs of limonene and n-decanol were above two hundred mM, 294 and 374 mM respectively, suggesting that this is the reason for the low toxicity of n-decane and the strong toxic effects of n-decanol on the microalga B. braunii. Although this is an approximate estimate of the actual solvent concentration in the cell membrane, it is consistent with the growth curves in Fig. 1, which show that, on average, solvents with a higher log \(\text {P}_{O{-}W}\) are more biocompatible. This finding is in line with previous research on B. braunii and other microalgae (Frenz et al. 1989a, b; León et al. 2001; Zhang et al. 2011a).
Table 1 Physico-chemical properties of solvents used in the two-phase aqueous-organic system
Notably, at some concentrations of limonene and n-decane, growth reached values greater than 100%. A possible explanation is that, within a certain concentration range, the solvents produce cell membrane instability, which in turn favours mass transfer between the cells and the culture medium; consequently, nutrients and oxygen permeate more easily through the cell membrane, accelerating growth (León et al. 2001). This enhanced growth with limonene and n-decane is also in agreement with previous studies on Aerobacter aerogenes and Saccharomyces cerevisiae (Jia et al. 1997; Rols et al. 1990), which reported that oxygen dissolves more easily in organic solvents than in water, so that the solvents act as oxygen vectors, increasing the oxygen transfer rate and hence growth in B. braunii cultures. Another reason for the enhanced growth could be that the solvents simultaneously serve as carbon sources (de Carvalho and da Fonseca 2004; de Carvalho et al. 2005), which is possible as B. braunii has been reported to be a mixotrophic microalga (Tanoi et al. 2010; Zhang et al. 2011b).
The adaptive response of B. braunii to solvent contact was similar for all solvents tested in our studies. The greatest changes in the fatty acid profile were produced by n-decane, where C16:0 abundance was remarkably increased while C18:1 decreased; a reduction of C16:1 also occurred (Fig. 2). As a result of these changes, a UI drop from 1.02 (control samples) to around 0.82 was produced for all n-decane concentrations (Fig. 3). As Fig. 1 illustrates, cells in contact with n-decane grew comparably to the control samples at all concentrations. These outcomes suggest that the n-decane dissolved in the culture medium and cell membrane was enough to stimulate de novo synthesis of saturated and/or less unsaturated lipids to counteract the increased fluidity, but not enough to stop cell growth (Figs. 1 and 3).
According to Piper (1995), solvent accumulation in the cell membrane produces changes in membrane fatty acids similar to those produced by an increase in temperature, because both stressors induce an increase in fluidity and a loss of selective permeability. B. braunii exposed to rising temperatures showed a reduction in its UI (Kalacheva et al. 2002; Sushchik et al. 2003), as was also found in this study. Sushchik et al. (2003) observed that an increase from 30 °C to 40 °C raised C16:0 from 56.3 to 73.0% of total fatty acids, while C18:2 and C18:3 were simultaneously reduced from 14.9 to 8.8% and from 19.4 to 10.3%, respectively. In this study, however, there were no significant changes in linoleic (C18:2) or linolenic (C18:3) acid abundance, probably because the increase in membrane rigidity obtained by reducing the number of double bonds from 3 to 2, or from 2 to 1, is not as large as that obtained by going from 1 to 0 double bonds, since the structure of a saturated fatty acid is linear. The underlying logic of a reduction in membrane fatty acid unsaturation is that saturated fatty acids counteract the increasing membrane fluidity and permeability caused by rising temperature or solvent contact, as they can be packed more tightly owing to their straightness (Sikkema et al. 1995). Other microalgae exposed to a rise in temperature, such as Nannochloropsis sp. (Hu and Gao 2016) and Chlorella vulgaris (Sushchik et al. 2003), also showed a similar behaviour, reducing unsaturation, whereas a reduction in temperature produces the opposite effect, i.e., increased fatty acid unsaturation to maintain membrane fluidity (Chen et al. 2008; McLarnon-Riches et al. 1998; Mikami and Murata 2003; Thompson et al. 1992).
Microalgae can also change fatty acid unsaturation levels to regulate membrane fluidity altered by environmental or anthropogenic factors such as light, heavy metals, CO2 or NaCl. However, the direction of these changes is not always clear (Hu and Gao 2016; McLarnon-Riches et al. 1998; Tsuzuki et al. 1990; Zhila et al. 2011), since eukaryotes use other complementary mechanisms to regulate membrane stability, such as the production of sterols or the synthesis of metabolites to counteract the osmotic pressure produced by salts (Rao et al. 2007; Vazquez and Arredondo 1991).
With regard to limonene and n-decanol, a UI reduction (compared to the control samples) was found at the three lower solvent concentrations, meaning that the cells were still able to make changes at the level of fatty acid synthesis despite the stress produced by the solvents. At the two higher solvent concentrations the UI remained comparable to the control samples for both solvents. Coincidentally, the higher concentrations of limonene and n-decanol produced lower growth rates compared to the control samples, suggesting that stress reduces the synthesis of fatty acids, which is a requisite for a change in the saturated/unsaturated ratio (Segura et al. 2004).
In conclusion, this study confirms for B. braunii what has been known for bacteria: B. braunii changes its lipid profile and the unsaturation of its membrane lipids upon contact with solvents as a strategy to maintain membrane fluidity, tolerate stress and keep growing. Additionally, as predicted by log \(\text {P}_{O{-}W}\), n-decanol was identified as the most aggressive solvent as a second phase; limonene had an intermediate effect, whereas n-decane seems able to maintain high growth rates even at high concentrations, making it the most suitable solvent for extracting valuable lipophilic compounds such as hydrocarbons in a two-phase culture, under the conditions used in this study.
FAME: fatty acid methyl ester
lx: lux
mM: millimolar
MMC: maximum membrane concentration
\(\text {P}_{O-W}\): octanol–water partition coefficient
\(\text {P}_{W-M}\): water–membrane partition coefficient
UI: unsaturation index
Aaronson S, Berner T, Gold K, Kushner L, Patni NJ, Repak A, Rubin D (1983) Some observations on the green planktonic alga, Botryococcus braunii and its bloom form. J Plankton Res 5(5):693–700
Ashokkumar V, Rengasamy R (2012) Mass culture of Botryococcus braunii Kutz. under open raceway pond for biofuel production. Bioresour Technol 104:394–399
Bazaes J, Sepulveda C, Acién F, Morales J, Gonzales L, Rivas M, Riquelme C (2012) Outdoor pilot-scale production of Botryococcus braunii in panel reactors. J Appl Phycol 24:1353–1360
Bligh EG, Dyer WJ (1959) A rapid method of total lipid extraction and purification. Can J Physiol Pharmacol 37(8):911–917
Chemat-Djenni Z, Ferhat MA, Tomao V, Chemat F (2010) Carotenoid extraction from tomato using a green solvent resulting from orange processing waste. J Essent Oil Bear Plants 13(2):139–147
Chen G-Q, Jiang Y, Chen F (2008) Variation of lipid class composition in Nitzschia laevis as a response to growth temperature change. Food Chem 109(1):88–94
Cheng P, Ji B, Gao L, Zhang W, Wang J, Liu T (2013) The growth, lipid and hydrocarbon production of Botryococcus braunii with attached cultivation. Bioresour Technol 138:95–100
Cooney M, Young G, Nagle N (2009) Extraction of bio-oils from microalgae. Sep Purif Rev 38(4):291–325
Daugulis AJ (1988) Integrated reaction and product recovery in bioreactor systems. Biotechnol Prog 4(3):113–122
EC and HJH designed the experiment, analyzed the data, and wrote the paper. EC performed the experiment. LYW, GAC and RN wrote the paper. All authors read and approved the final manuscript.
Acknowledgements
All raw data are available from the corresponding author.
This study was funded by CONICYT award numbers 781211006, 24121442, 75120043, Anillo de Investigación en Ciencia y Tecnología GAMBIO Project No. ACT172128, CONICYT and FONDECYT 1150707.
Doctoral Program in Science of Natural Resources, University of La Frontera, Av. Francisco Salazar 01145, Temuco, Chile
Eric Concha
Department of Environmental Biotechnology, Helmholtz Centre for Environmental Research-UFZ, Permoserstr. 15, 04318, Leipzig, Germany
Hermann J. Heipieper
Department of Environmental Microbiology, Helmholtz Centre for Environmental Research-UFZ, Permoserstr. 15, 04318, Leipzig, Germany
Lukas Y. Wick
Department of Chemical Engineering, Instituto del Medio Ambiente, Scientific and Technological Bio-resource Nucleus, University of La Frontera, Av. Francisco Salazar 01145, Temuco, Chile
Gustavo A. Ciudad
Department of Chemical Engineering, Centre for Biotechnology and Bioengineering (CeBiB), Scientific and Technological Bio-resource Nucleus, University of La Frontera, Av. Francisco Salazar 01145, Temuco, Chile
Rodrigo Navia
Correspondence to Eric Concha.
Concha, E., Heipieper, H.J., Wick, L.Y. et al. Effects of limonene, n-decane and n-decanol on growth and membrane fatty acid composition of the microalga Botryococcus braunii. AMB Expr 8, 189 (2018) doi:10.1186/s13568-018-0718-9
Botryococcus braunii
Two-phase system
Solvent tolerance
Fatty acid profile
Marcus A. Brubaker
[email protected]
Assistant Professor, York University
Publications (bib) (Google Scholar)
I am now an Assistant Professor at York University. This website will continue to exist for now but will not be updated further.
My new website is available here
There has been much going on behind the scenes these last few months. First, code for "Building Proteins in a Day: Efficient 3D Molecular Reconstruction" has finally been released on GitHub. Feel free to check it out and file any issues on the GitHub issue tracker.
Several new papers have been published including
Map-Based Probabilistic Visual Self-Localization in IEEE PAMI with Andreas Geiger and Raquel Urtasun,
Efficient Optimization for Sparse Gaussian Process Regression in IEEE PAMI with Yanshuai Cao, David Fleet and Aaron Hertzmann,
Alignment of cryo-EM movies of individual particles by optimization of image translation in Journal of Structural Biology with John Rubinstein,
Description and comparison of algorithms for correcting anisotropic magnification in cryo-EM images in Journal of Structural Biology with Jianhua Zhao, Samir Benlekbir and John Rubinstein,
The Stan Math Library: Reverse-Mode Automatic Differentiation in C++ in arXiv with the Stan Team.
Finally, as in previous years, I am teaching CSCC11: Introduction to Machine Learning and Data Mining again at UTSC.
I'm very pleased to report that my most recent paper "Building Proteins in a Day: Efficient 3D Molecular Reconstruction" with Ali Punjani and David Fleet has been accepted for oral presentation at CVPR 2015.
I have also been invited to give a symposium at the 2015 Conference on Computer and Robot Vision in Halifax this June. This will be something of a preview of my talk at CVPR 2015.
Following on Ali's presentation at the NIPS Machine Learning in Computational Biology Workshop we have published a technical report on arXiv: "Microscopic Advances with Large-Scale Learning: Stochastic Optimization for Cryo-EM".
We've also just published an arXiv paper about correcting magnification anisotropy in Cryo-EM images.
Ali Punjani and I will be presenting some of our recent work on 3D structure estimation in CryoEM at the NIPS Machine Learning in Computational Biology Workshop. If you're going to be at NIPS, come by and check it out.
A new paper with John Rubinstein has been posted to arXiv describing a method of particle movie alignment for CryoEM: Alignment of cryo-EM movies of individual particles by global optimization of image translations.
I will be, once again, teaching at UTSC this fall: CSCC11 - Introduction to Machine Learning
Recent work with Yali Wang and Raquel Urtasun on online filtering with Gaussian Processes is now available and will be presented at UAI 2014 in a few days.
Recent work with Yanshuai Cao, Aaron Hertzmann and David Fleet on sparse Gaussian Processes is now available on arXiv: Efficient Optimization for Sparse Gaussian Process Regression. This will be presented at NIPS 2013.
Stan 2.0 has just been released. Lots of big changes in this version so go check it out if you're interested in Bayesian statistical estimation and MCMC.
I will be speaking to the Toronto chapter of the IEEE Computer Society on September 26th on my recent localization work.
Recent work with Yanshuai Cao, Aaron Hertzmann and David Fleet on Gaussian Process sparsification has been accepted for publication at NIPS 2013. This paper will be available soon.
Also, this term I will be teaching CSCC11: Introduction to Machine Learning and Data Mining at the University of Toronto, Scarborough.
Source code for Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization is now available on the project website!
My recent project on vehicle localization using visual odometry has been written up in New Scientist.
I have also made available a PDF version of the slides used for my CVPR 2013 presentation. They can be accessed from the project website.
Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization was selected as the Best Paper Runner-Up at CVPR 2013! See the project website for the video, paper and poster, which was just recently uploaded.
My most recent project, "Lost! Leveraging the Crowd for Probabilistic Visual Self-Localization" with Andreas Geiger and Raquel Urtasun, has been accepted for an oral presentation at CVPR 2013. See the project website for the video and paper.
My paper with Jianhua Zhao and John L. Rubinstein is now available: TMaCS: A hybrid template matching and classification system for partially-automated particle selection. Source can be found here.
A paper with Jianhua Zhao and John L. Rubinstein, "TMaCS: A hybrid template matching and classification system for partially-automated particle selection" has been accepted for publication in the Journal of Structural Biology. It's not available yet, but look for it soon.
This fall I will be teaching CSCD11: Machine Learning and Data Mining at the University of Toronto, Scarborough.
Better late than never, some of the video sequences used in my papers are now available from here. Videos of results from a number of my papers have also been uploaded to YouTube.
I've begun organizing a Computer Vision Reading Group at U of T. The first meeting will be May 15th.
Our paper (with Mathieu Salzmann and Raquel Urtasun) "A Family of MCMC Methods on Implicitly Defined Manifolds" will be presented at AISTATS 2012. Matlab code is available here.
I have finished my PhD thesis "Physical Models of Human Motion for Estimation and Scene Analysis" and have started a postdoc with Raquel Urtasun (Toyota Technological Institute at Chicago) and David Fleet (University of Toronto). I am now also consulting as Research Associate with Cadre Research Labs.
Our paper "A Bayesian Method for 3-D Macromolecular Structure Inference using Class Average Images from Single Particle Electron Microscopy" has been accepted into the journal Bioinformatics. A preprint of the paper is available here and the project website can be found here.
Citation information has been updated for the IJCV article to include the volume, issue and starting page numbers.
Three new papers have been made available: "Estimating Contact Dynamics" (ICCV 2009), "Physics-based Person Tracking Using the Anthropomorphic Walker" (IJCV 2010), and "Video-based People Tracking" (In Handbook of Ambient Intelligence and Smart Environments).
Leonid Sigal, David Fleet and I will be running a tutorial for ICCV 2009: "Physics-Based Human Motion Modelling for People Tracking".
Map Localization through Visual Odometry
Self-localization is key for building autonomous systems such as self-driving cars. This project explores a novel localization system which relies only on visual input and freely available, community developed maps from the OpenStreetMap project. Based on these inputs, our system is able to quickly and efficiently determine the location of a vehicle to an accuracy of ~3m after only a few seconds of driving.
Code implementing the proposed method will be made available for non-commercial use.
Markov Chain Monte Carlo on Constrained Spaces
Traditional MCMC methods are only applicable to distributions defined on $\mathbb{R}^n$. However, there exist many application domains where the distributions cannot easily be defined on a Euclidean space. To address this limitation, we propose a general constrained version of Hamiltonian Monte Carlo, and give conditions under which the Markov chain is convergent.
Code implementing the proposed methods is available.
Stan is a package for performing Bayesian inference using the No-U-Turn sampler, a variant of Hamiltonian Monte Carlo. It provides a simple language in which to specify statistical models and uses automatic differentiation to efficiently compute the needed gradients to perform HMC.
I have been contributing to Stan since early 2012, working to include and optimize matrix calculations and allow Stan to efficiently work with larger models and bigger datasets.
Estimating Human Interactions using 3D Physics-based Models
Physics-based models of human motion provide both an important cue for estimation but, perhaps more importantly, provide significant information on the interactions of a person with the world. This work explores this connection by estimating 3D contact geometry and the forces driving a motion using a plausible 3D physical model of human motion.
Code to implement the model described was provided as part of our tutorial at ICCV 2009: "Physics-Based Human Motion Modelling for People Tracking".
Abstract Physics-based Models for Human Pose Tracking
This project explored the use of abstract, physics-based models of human locomotion in tracking. Abstract models are used in robotics and biomechanics to capture key aspects of the dynamics of human locomotion. Some of these models exhibit humanoid-like walking as passive limit cycles of their motion, demonstrating their stability and applicability to human motion modelling. This project used these abstractions to build motion models which were then used to constrain human pose tracking.
Code and data is available.
Bayesian Methods for Electron Cryo-Microscopy
Electron Cryo-Microscopy (Cryo-EM) is a technique for recovering the 3D structure of molecules such as proteins and viruses. This project explored a technique for 3D density estimation based on a Bayesian formulation of the problem. The method performs ab initio inference of the three-dimensional structures of macromolecules from single particle electron cryo-microscopy experiments using class average images.
I am currently a Postdoctoral Fellow at the University of Toronto Scarborough. I also work as a Research Associate with Cadre Research Labs. Previously I worked with Raquel Urtasun (Toyota Technological Institute at Chicago). I finished my Ph.D. in September of 2011 supervised by David Fleet at the University of Toronto in the Computer Vision group. I received my M.Sc. and Honors B.Sc. from the University of Toronto in November 2006 and June 2004 respectively.
Feel free to contact me at [email protected].
Generally, I have a strong interest in machine learning and probabilistic methods, particularly when applied to computer vision related problems.
Recently I looked at the use of map data in computer vision applications such as localization through the use of visual odometry.
My PhD research looked at the use of physics for tracking human motion. I explored abstract models of bipedal walking in the context of monocular tracking with particle filters and the dynamics of complex articulated models of the human body in motion estimation and dynamic scene analysis.
I have also been interested in the problems surrounding Electron Cryo-Microscopy, an imaging technique used to estimate the structure of small molecules such as proteins and viruses. With colleagues I looked at the use of Bayesian methods for single particle reconstruction and have recently helped investigate using modern Machine Learning techniques in semi-supervised particle picking.
Hobbies and Other Interests
Cooking and food in general
Modeling the dynamics of Lassa fever in Nigeria
Mayowa M. Ojo ORCID: orcid.org/0000-0002-7867-47131,
B. Gbadamosi2,
Temitope O. Benson3,
O. Adebimpe4 &
A. L. Georgina5
Lassa fever is a zoonotic disease spread by infected rodents known as multimammate rats. The disease has posed a significant and major health challenge in West African countries, including Nigeria. To gain a deeper understanding of Lassa fever epidemiology in Nigeria, we present a deterministic dynamical model to study its transmission behavior in the population. To mimic the disease's biological history, we divide the population into two groups: humans and rodents. We establish the quantity known as the reproduction number \({\mathcal {R}}_{0}\). The results show that if \({\mathcal {R}}_{0} <1\), the disease-free equilibrium of the system is stable, and otherwise it is unstable. The model fitting was performed using the nonlinear least squares method on cumulative reported cases from Nigeria between 2018 and 2020 to obtain the best fit that describes the dynamics of this disease in Nigeria. In addition, a sensitivity analysis was performed, and the numerical solution of the system was derived using an iterative scheme, a fifth-order Runge–Kutta method. Using different numerical values for each parameter, we investigate the effect of the parameters with the highest sensitivity indices on the populations of infected humans and infected rodents. Our findings indicate that any control strategy that reduces the rodent population and the risk of transmission from rodents to humans and rodents would aid the control of Lassa fever in the population.
The World Health Organization defines disease as any abnormal condition that impairs the function of an organism such as a human, animal, or plant. In humans, diseases are commonly defined as any medical condition characterized by specific symptoms such as pain, distress, dysfunction, or even death. Infectious diseases are caused by microorganisms and can spread from one host to another via direct or indirect transmission [1]. Because of their potential to cause illness and death worldwide, numerous infectious diseases have become a global health challenge. Some of these diseases are unique and are associated with specific regions and environments. Lassa fever (LF) is one of many infectious diseases that are emerging or reappearing in some West African countries. It has caused widespread and serious health problems in West African countries such as Nigeria, Liberia, Ghana, Guinea, and Sierra Leone [2]. According to the Centers for Disease Control and Prevention (CDC) and the World Health Organization (WHO), the annual case count for LF ranges between 100,000 and 300,000, with an estimated 5,000 deaths in West Africa [3, 4]. LF was first described in the 1950s, and the viral particle was discovered in 1969 in the northern part of Nigeria, in a city called Lassa in Borno state. Lassa hemorrhagic fever is another name for Lassa fever, which is caused by the Lassa virus (LV). The multimammate rat (Mastomys natalensis) is the primary host of LV, a virus of the family Arenaviridae. The virus is primarily spread to humans through direct contact with food contaminated by the urine or excrement of an infected rodent. Human-to-human transmission is uncommon, but it is possible if a person shares medical equipment with an infected person without proper sterilization. Furthermore, it can be transmitted by dust particles through mucous membranes or skin breaks in humans. Because of its ability to spread from infected animals to humans, LF is classified as a zoonotic disease [3, 5]. In humans, Lassa fever symptoms include headaches, chest pain, nausea, cough, vomiting, diarrhea, muscle pain, abdominal pain, sore throat, and fever. In severe cases, symptoms may include swelling of the face, low blood pressure, fluid in the lungs, and bleeding from the nose, mouth, or vagina. In the most severe cases, the disease can result in death within two weeks of the onset of symptoms [6]. Controlling LF in the population can be difficult due to the lack of a vaccine against the virus; however, the antiviral agent ribavirin has been used as a treatment drug in regions where the disease is endemic [3]. According to previous research, the prevalence of Lassa fever increases markedly during the wet season, when more Mastomys rodents migrate from their natural habitat into the human environment to breed, gaining proximity to humans. This is the time when human contact with the rodents increases, thus increasing the force of infection and/or the rate of occurrence. Furthermore, previous research has established rainfall as a major ecological factor influencing and contributing to the transmission dynamics of LF, as the transmission probability rate is greater during the rainy season than during the dry season [7]. A variety of socioeconomic factors are also known to play a role in the dynamic spread of Lassa fever.
These socioeconomic factors include, to name a few, educational level, occupation, and income, all of which influence the dynamic spread of Lassa fever through a lack of amenities, malnutrition, an unclean environment, a low standard of living, insufficient health facilities, a lack of a good water source, and poor personal hygiene [2].
Numerous mathematical modelers and infectious disease experts have conducted studies to provide further insight into the transmission dynamics of this endemic disease and into different approaches to its control (see [2,3,4,5,6,7,8,9,10,11,12] for examples). We present a few examples of these studies, along with their methodologies, approaches, and findings. Ifeanyi developed a multiple-patch model in [2] to investigate the effects of socioeconomic class on Lassa fever transmission dynamics. The author performed a sensitivity analysis, followed by a numerical illustration of the effect of the model parameters on disease spread and incidence. Their findings show that humans' socioeconomic status has a significant influence on the dynamics of LF transmission. As a result, the study recommends that human socioeconomic classes be considered in order to achieve complete LF eradication in communities where it remains endemic. A study titled "Evaluation of rodent control to fight Lassa fever based on field data and mathematical modelling" was presented in [3]. The authors used a mathematical model to test various control strategies in rural upper Guinea in order to determine how long and how frequently control should be performed to eliminate LF in rural areas. The control strategies employed in this study include annual density control, continuous density control, and rodent vaccination. According to their field data analysis, a yearly control strategy is unlikely to reduce LV spillover to humans because of the rapid recovery of the rodent population following rodenticide application. Furthermore, the mathematical model suggests that the best strategy for eradicating LV is continuous control or rodent vaccination. A spatial analysis of Lassa fever data from human cases and infected rodents from 1965 to 2007 was performed in [11] to describe LF risk maps in West Africa. The authors examined the impact of extrinsic environmental variables such as temperature, vegetation, and rainfall on the transmission dynamics of LF in Cameroon. According to the study, rainfall has a strong influence in defining high-risk areas, whereas temperature has a lesser influence. Furthermore, the risk maps revealed that the highest-risk region is situated between Guinea and Cameroon.
Bakare's research [6] developed a non-autonomous system of nonlinear ordinary differential equations that captures the dynamics of LF transmission and the seasonal variation in the birth of Mastomys rodents. The authors evaluated LF intervention strategies by using the elasticity of the equilibrium prevalence to predict the optimal intervention for controlling the disease in the population. Early ribavirin treatment, as well as an early combination of intervention strategies such as effective community hygiene, proper isolation of infected humans, and rodent elimination, will facilitate effective disease control in the population. A mathematical model of the transmission dynamics of Lassa fever infection with control in two different but complementary hosts is presented in [10]. The model includes a compartment of deceased infectious humans who can still infect susceptible individuals. According to the study's findings, the best way to control secondary human-to-human transmission is to establish more Lassa fever diagnostic centers and to use precautionary burial practices.
Salihu's work is one of the studies that have investigated the dynamics of LF in Nigeria [5]. The authors developed a mechanistic model of the large-scale Lassa fever epidemics in Nigeria from 2016 to 2019 to describe the interaction between the human and rodent populations while taking quarantine, isolation, and hospitalization processes into account. Their findings suggest that increasing the quarantine and isolation of infected individuals reduces Lassa fever transmission from human to human. Their findings also indicate that, across the three outbreaks, initial susceptibility increased from 2016 to 2019. Zhao conducted another study on the large-scale LF outbreak in Nigeria [7]. The authors investigate the epidemiological characteristics of LF epidemics in various Nigerian states by quantifying the relationship between the disease reproduction number and local rainfall using the Richards growth model and the three-parameter logistic, Gompertz, and Weibull growth models. Surveillance data were also used to fit the respective growth models in order to estimate the reproduction number and the epidemic turning points. Overall, the study finds that rainfall has a significant impact on the transmission of LF in Nigeria.
To better understand the transmission dynamics of Lassa fever in Nigeria, we developed a deterministic model using systems of ordinary differential equations and analyzed it both analytically and numerically, in order to provide a more comprehensive understanding of the spread of LF using real cumulative data from the country. The remainder of the article is organized as follows. We first present the formulation of the Lassa fever mathematical model. We then carry out a mathematical analysis of the formulated model, which includes determining the positivity of solutions, the invariant region, and the stability of the Lassa fever free equilibrium. Next, we perform data fitting and parameter estimation, including a sensitivity analysis of the model parameters. Numerical results, a discussion of the analytical findings, and the study's conclusions are presented in the final sections.
Formulation of mathematical model
The core objectives of this study are achieved via the development, analysis, and parameterization (with real data from Nigeria) of a Kermack–McKendrick-type SEIR (susceptible, exposed, infected, and recovered) epidemic model for the transmission dynamics of Lassa fever in Nigeria, together with simulations of different scenarios. Since Lassa fever is a hemorrhagic febrile condition transmitted between two hosts (humans and rodents), we derived our model by classifying the host population into two groups, namely the human and rodent populations.
The total human population at time t, denoted by \(N_{h}(t)\), is divided into susceptible, exposed, infectious, and recovered individuals \((S_{h}, E_{h}, I_{h}, R_{h})\). Furthermore, the total rodent population at time t, denoted by \(N_{r}(t)\), is divided into susceptible rodents \((S_{r})\) and infectious rodents \((I_{r})\). Hence, the total human and rodent populations at a given time are given as \(N_{h}(t)=S_{h}+E_{h}+I_{h}+R_{h}\) and \(N_{r}(t)=S_{r}+I_{r}\), respectively. We model the progression of each subpopulation from one class to another based on disease status. The susceptible human population is replenished through recruitment at the rate \(\Lambda _{h}\), via birth or immigration, and by recovered individuals who lose their immunity at the rate \(\tau _{h}\). The susceptible human population is depleted by infection following effective contact with infected individuals at the rate \(\beta _{1}\) given by
$$\begin{aligned} \beta _{1}=\frac{\beta _{r}I_{r}+\beta _{h}I_{h}}{N_{h}} \end{aligned}$$
The parameters \(\beta _{h}\) and \(\beta _{r}\) are the effective transmission probabilities per contact with infected humans and rodents, respectively. We assume that all human and rodent subpopulations are reduced by natural death at the rates \(\mu _{h}\) and \(\mu _{r}\), respectively. Following infection, susceptible individuals progress to the exposed class; this is the stage at which individuals undergo the incubation period of the infection. Exposed individuals become infectious and progress to the infectious class at the rate \(\sigma _{h}\). The infectious subpopulation is reduced by recovery due to treatment at the rate \(\phi _{h}\) and by disease-induced death at the rate \(\delta _{h}\). The recovered subpopulation is populated by the recovery of infectious individuals and is further reduced by the loss of immunity of recovered individuals. The susceptible rodent subpopulation is populated by the birth of rodents at the rate \(\Lambda _{r}\). This subpopulation is reduced by infection following effective contact with infected rodents at the rate \(\beta _{2}\) given by
$$\begin{aligned} \beta _{2}=\frac{\beta _{r}I_{r}}{N_{r}} \end{aligned}$$
The parameter \(\beta _{r}\) is the effective transmission probability per contact with infected rodents. Following the descriptions above, the deterministic system of nonlinear differential equations describing the dynamics of Lassa fever in the population is given as
$$\begin{aligned} \frac{dS_{h}}{dt}= & {} \Lambda _{h} + \tau _{h}R_{h}- \beta _{1}S_{h}-\mu _{h}S_{h}\nonumber \\ \frac{dE_{h}}{dt}= & {} \beta _{1}S_{h}-(\sigma _{h}+\mu _{h})E_{h} \nonumber \\ \frac{dI_{h}}{dt}= & {} \sigma _{h}E_{h}-(\phi _{h}+\mu _{h}+\delta _{h})I_{h} \nonumber \\ \frac{dR_{h}}{dt}= & {} \phi _{h}I_{h}-(\mu _{h}+\tau _{h})R_{h} \nonumber \\ \frac{dS_{r}}{dt}= & {} \Lambda _{r}-\beta _{2}S_{r}-\mu _{r}S_{r} \nonumber \\ \frac{dI_{r}}{dt}= & {} \beta _{2}S_{r}-\mu _{r}I_{r} \end{aligned}$$
subject to the following initial conditions \(S_{h}(0)>0, E_{h}(0) \ge 0, I_{h}(0)\ge 0, R_{h}(0)\ge 0, S_{r}(0)>0\), and \(I_{r}(0)\ge 0\). The descriptions of the model parameters and variables are given in Table 1, and the schematic diagram is given in Fig. 1.
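To make the numerical treatment of system (1) concrete, the following is a minimal sketch (in Python/SciPy rather than the MATLAB environment used later in the paper) of a right-hand-side function for model (1) and its integration with an explicit Runge–Kutta scheme. The parameter values and initial sizes below are illustrative placeholders only and do not correspond to the fitted values reported in Table 2.

```python
# Minimal sketch of integrating the Lassa fever model (1) with SciPy.
# All parameter values and initial conditions are hypothetical placeholders.
from scipy.integrate import solve_ivp

params = dict(Lambda_h=68088.0, tau_h=0.01, beta_h=0.05, beta_r=0.04,
              mu_h=1.0 / (60.45 * 52), sigma_h=0.3, phi_h=0.1,
              delta_h=0.05, Lambda_r=500.0, mu_r=0.06)

def lassa_rhs(t, y, p):
    S_h, E_h, I_h, R_h, S_r, I_r = y
    N_h = S_h + E_h + I_h + R_h
    N_r = S_r + I_r
    beta1 = (p["beta_r"] * I_r + p["beta_h"] * I_h) / N_h   # force of infection, humans
    beta2 = p["beta_r"] * I_r / N_r                         # force of infection, rodents
    dS_h = p["Lambda_h"] + p["tau_h"] * R_h - beta1 * S_h - p["mu_h"] * S_h
    dE_h = beta1 * S_h - (p["sigma_h"] + p["mu_h"]) * E_h
    dI_h = p["sigma_h"] * E_h - (p["phi_h"] + p["mu_h"] + p["delta_h"]) * I_h
    dR_h = p["phi_h"] * I_h - (p["mu_h"] + p["tau_h"]) * R_h
    dS_r = p["Lambda_r"] - beta2 * S_r - p["mu_r"] * S_r
    dI_r = beta2 * S_r - p["mu_r"] * I_r
    return [dS_h, dE_h, dI_h, dR_h, dS_r, dI_r]

y0 = [2.14e8, 50.0, 20.0, 0.0, 9e3, 10.0]                   # hypothetical initial sizes
sol = solve_ivp(lassa_rhs, (0.0, 156.0), y0, args=(params,),
                method="RK45")                              # weeks 0..156
print(sol.y[1, -1] + sol.y[2, -1])                          # total infected humans E_h + I_h at the final time
```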
Schematic diagram of the Lassa fever model (1)
Table 1 Description of the variables and parameters of the Lassa fever model (1)
Mathematical analysis
Positivity of solutions
In this section, the basic properties of model (1) will be explored. Since model (1) describes both the human and rodent populations during the course of a Lassa fever epidemic, it will only be epidemiologically meaningful if all its state variables are nonnegative for all time \(t\ge 0\). In other words, solutions of the model system (1) with nonnegative initial data will remain nonnegative for all time \(t>0\).
The solutions \(S_{h}(t), E_{h}(t), I_{h}(t), R_{h}(t), S_{r}(t)\), and \(I_{r}\) of the model system (1) with nonnegative initial conditions \(S_{h}(0); E_{h}(0); I_{h}(0); R_{h}(0); S_{r}(0); I_{r}(0)\) will remain nonnegative for all time \(t>0\).
Let \(t_{1}=\sup \{t>0: S_{h}(t)>0, E_{h}(t)>0, I_{h}(t)>0, R_{h}(t)>0, S_{r}(t)>0, I_{r}(t)>0 \in [0,t] \}\). Thus, \(t_{1}>0\). It follows from the first equation of system (1), that
$$\begin{aligned} \frac{dS_{h}}{dt}= & {} \Lambda _{h} +\tau _{h} R_{h}-\beta _{1}S_{h}-\mu _{h}S_{h}\ge \Lambda _{h}-\beta _{1}S_{h}-\mu _{h}S_{h} \end{aligned}$$
Employing the integrating factor method, this can be written as:
$$\begin{aligned} \frac{d}{dt}\left( S_{h}(t)exp\left[ \mu _{h}t+\int _{0}^{t}\beta _{1}(x)dx\right] \right) \ge \Lambda _{h}exp\left[ \mu _{h}t+\int _{0}^{t}\beta _{1}(x)dx\right] \end{aligned}$$
Hence,
$$\begin{aligned} S_{h}(t_{1})exp\left[ \mu _{h}t_{1}+\int _{0}^{t_{1}}\beta _{1}(x)dx\right] -S_{h}(0)\ge \int _{0}^{t_{1}}\Lambda _{h}\left( exp\left[ \mu _{h}y+\int _{0}^{y}\beta _{1}(x)dx\right] \right) dy \end{aligned}$$
so that,
$$\begin{aligned} S_{h}(t_{1})\ge & {} S_{h}(0) exp\left[ -\mu _{h}t_{1}-\int _{0}^{t_{1}}\beta _{1}(x)dx\right] \\&+ exp\left[ -\mu _{h}t_{1}-\int _{0}^{t_{1}}\beta _{1}(x)dx\right] \times \int _{0}^{t_{1}}\Lambda _{h}\left( exp\left[ \mu _{h}y+\int _{0}^{y}\beta _{1}(x)dx\right] \right) dy >0. \end{aligned}$$
Similarly, it can be shown that \(E_{h}(t)\ge 0\), \(I_{h}(t)\ge 0\), \(R_{h}(t)\ge 0\), \(S_{r}(t)>0\), and \(I_{r}(t)\ge 0\) for all time \(t>0\). Therefore, all the solutions of model (1) remain positive for all nonnegative initial conditions. \(\square\)
Invariant region
In this section, model (1) will be analyzed in a biologically feasible region as follows. Consider the biologically feasible region \(\Omega =\Omega _{h}\times \Omega _{r} \subset {\mathcal {R}}_{+}^{4}\times {\mathcal {R}}_{+}^{2}\) with
$$\begin{aligned} \Omega _{h}=\left\{ S_{h}, E_{h}, I_{h}, R_{h}\in {\mathcal {R}}_{+}^{4}: N_{h}\le \frac{\Lambda _{h}}{\mu _{h}}\right\} \end{aligned}$$
$$\begin{aligned} \Omega _{r}=\left\{ S_{r}, I_{r} \in {\mathcal {R}}_{+}^{2}: N_{r}\le \frac{\Lambda _{r}}{\mu _{r}} \right\} \end{aligned}$$
It can be shown that the set \(\Omega\) is a positively invariant set and global attractor of this system. This implies any phase trajectory initiated anywhere in the nonnegative region \({\mathcal {R}}_{+}^{6}\) enters the feasible region \(\Omega\) and remains in \(\Omega\) thereafter.
The biologically feasible region \(\Omega =\Omega _{h}\times \Omega _{r} \subset {\mathcal {R}}_{+}^{4}\times {\mathcal {R}}_{+}^{2}\) of the Lassa fever model (1) is positively invariant with nonnegative initial conditions in \({\mathcal {R}}_{+}^{6}\).
The following steps are followed to establish the positive invariance of \(\Omega\) (i.e., solutions in \(\Omega\) remain in \(\Omega\) for all \(t>0\)). The rates of change of the total human and rodent populations \(N_{h}\) and \(N_{r}\), respectively, are obtained by adding the respective components of model (1), which results in
$$\begin{aligned} \frac{dN_{h}(t)}{dt}= & {} \Lambda _{h}-\mu _{h}N_{h}(t)-\delta _{h}I_{h}(t) \\ \frac{dN_{r}(t)}{dt}= & {} \Lambda _{r}-\mu _{r}N_{r}(t) \end{aligned}$$
$$\begin{aligned} \frac{dN_{h}(t)}{dt} \le \Lambda _{h}-\mu _{h}N_{h}(t), \quad and \qquad \frac{dN_{r}(t)}{dt}= \Lambda _{r}-\mu _{r}N_{r}(t) \end{aligned}$$
Hence, \(N_{h}(t)\le N_{h}(0)e^{-\mu _{h}t} +\frac{\Lambda _{h}}{\mu _{h}}\left( 1-e^{-\mu _{h}t}\right)\) and \(N_{r}(t)= N_{r}(0)e^{-\mu _{r}t} +\frac{\Lambda _{r}}{\mu _{r}}\left( 1-e^{-\mu _{r}t}\right)\). In particular, \(N_{h}(t)\le \frac{\Lambda _{h}}{\mu _{h}}\) and \(N_{r}(t)\le \frac{\Lambda _{r}}{\mu _{r}}\) if the total human population and rodent population at the initial instant of time, \(N_{h}(0)\le \frac{\Lambda _{h}}{\mu _{h}}\) and \(N_{r}(0)\le \frac{\Lambda _{r}}{\mu _{r}}\) , respectively. So, the region \(\Omega\) is positively invariant. Thus, it is consequently adequate to consider the dynamics of Lassa fever governed by model (1) in the biological feasible region \(\Omega\), where the model is considered to be epidemiologically and mathematically well posed [13, 14]. \(\square\)
Existence and Stability of Lassa fever free equilibrium (LFFE)
The Lassa fever free equilibrium of model (1) denoted by \({\mathcal {E}}_{0}\) is given by
$$\begin{aligned} {\mathcal {E}}_{0}=(S_{h}^{*}, E_{h}^{*}, I_{h}^{*}, R_{h}^{*}, S_{r}^{*}, I_{r}^{*})= & {} \left( \frac{\Lambda _{h}}{\mu _{h}}, 0, 0, 0, \frac{\Lambda _{r}}{\mu _{r}}, 0 \right) \end{aligned}$$
The next-generation matrix method is used on system (1) for determining the reproduction number \({\mathcal {R}}_{0}\). The epidemiological quantity \({\mathcal {R}}_{0}\), called the reproduction number, measures the typical number of Lassa fever cases that a Lassa fever-infected individual can generate in a human population that is completely susceptible [13, 15]. The \({\mathcal {R}}_{0}\) is used in investigating the local asymptotic stability of the Lassa fever free equilibrium \({\mathcal {E}}_{0}\). By using the infected compartments (\(E_{h}^{*}, I_{h}^{*}, I_{r}^{*}\)) at the LFFE, and following the notation in [16, 17], the Jacobian matrices F and V for the new infection terms and the remaining transfer terms are, respectively, given by
$$\begin{aligned} F=\begin{pmatrix} 0&{}&{}\frac{\beta _{h} S_{h}^{*}}{N_{h}^{*}}&{}&{}\frac{\beta _{r} S_{h}^{*}}{N_{h}^{*}}\\ \\ 0 &{}&{} 0&{}&{}0\\ \\ 0 &{}&{} 0&{}&{}\frac{\beta _{r} S_{r}^{*}}{N_{r}^{*}} \end{pmatrix} \qquad \qquad and \qquad \qquad V=\begin{pmatrix} k_{1}&{}&{} 0 &{}&{}0\\ \\ -\sigma _{h}&{}&{} k_{2} &{}&{} 0\\ \\ 0 &{}&{} 0 &{}&{} \mu _{r} \end{pmatrix} \end{aligned}$$
It follows that the basic reproduction number of model (1) is given by \({\mathcal {R}}_{0}=\rho (FV^{-1})\), where \(\rho\) is the spectral radius of the matrix. Hence,
$$\begin{aligned} {\mathcal {R}}_{0}={\mathcal {R}}_{h}+{\mathcal {R}}_{r}= & {} \frac{\beta _{h}\mu _{r}\sigma _{h}+\beta _{r}k_{1}k_{2}}{k_{1}k_{2}\mu _{r}} \end{aligned}$$
where \({\mathcal {R}}_{h}= \frac{\beta _{h}\sigma _{h}}{k_{1}k_{2}}\), \({\mathcal {R}}_{r}= \frac{\beta _{r}}{\mu _{r}}\), \(k_{1}=\sigma _{h}+\mu _{h}\), and \(k_{2}=\mu _{h}+\delta _{h}+\phi _{h}\). In the threshold quantity \({\mathcal {R}}_{0}\) given above in (5), the quantity \({\mathcal {R}}_{h}\) measures the contribution to Lassa fever risk caused by humans in the population, while the quantity \({\mathcal {R}}_{r}\) measures the contribution to Lassa fever risk caused by rodents in the population. It must be noted that an increase in either of these threshold quantities will directly increase the risk of Lassa fever in the population. The following result is established.
The Lassa fever free equilibrium \({\mathcal {E}}_{0}\) of model (1) is locally asymptotically stable in the biological feasible region \(\Omega\) if \({\mathcal {R}}_{0}<1\) and unstable if \({\mathcal {R}}_{0}>1\).
In order to prove the lemma above, we obtain the Jacobian matrix by evaluating system (1) at Lassa fever free equilibrium \({\mathcal {E}}_{0}\) as
$$\begin{aligned} {\mathcal {J}}({\mathcal {E}}_{0})=\begin{pmatrix} -\mu _{h}&{}0&{}-\beta _{h}&{}\tau _{h}&{}0&{}-\beta _{r}\\ 0&{}-k_{1}&{}\beta _{h}&{}0&{}0&{}\beta _{r}\\ 0&{}\sigma _{h}&{}-k_{2}&{}0&{}0&{}0\\ 0&{}0&{}\phi _{h}&{}-k_{3}&{}0&{}0\\ 0&{}0&{}0&{}0&{}-\mu _{r}&{}-\beta _{r}\\ 0&{}0&{}0&{}0&{}0&{}-\mu _{r}+\beta _{r} \end{pmatrix} \end{aligned}$$
where \(k_{1}=\sigma _{h}+\mu _{h}\), \(k_{2}=\mu _{h}+\delta _{h}+\phi _{h}\), and \(k_{3}=\mu _{h}+\tau _{h}\). From (6), it is sufficient to show that all the eigenvalues of \({\mathcal {J}}({\mathcal {E}}_{0})\) are negative. We obtain the first four eigenvalues as \(-\mu _{r}\), \(-\mu _{h}\), \(-(\mu _{r}-\beta _{r})\) and \(-k_{3}\). It must be noted that \(-(\mu _{r}-\beta _{r})\) can also be re-written as \(-\mu _{r}\left( 1-{\mathcal {R}}_{r}\right)\), where \({\mathcal {R}}_{r}=\frac{\beta _{r}}{\mu _{r}}\). The remaining eigenvalues can be obtained from the sub-matrix \({\mathcal {M}}\) which is written as
$$\begin{aligned} {\mathcal {M}}=\begin{pmatrix} -k_{1}&{}&{}\beta _{h}\\ \\ \sigma _{h}&{}&{}-k_{2} \end{pmatrix} \end{aligned}$$
According to the Routh–Hurwitz condition, all the eigenvalues of the matrix \({\mathcal {M}}\) are real and negative if
Trace(\({\mathcal {M}}\))\(<0\)
Determinant(\({\mathcal {M}}\))\(>0\)
It can be shown that,
$$\begin{aligned} Tr({\mathcal {M}})=-(k_{1}+k_{2})<0 \end{aligned}$$
$$\begin{aligned} Det({\mathcal {M}})= k_{1}k_{2}-\beta _{h}\sigma _{h}=k_{1}k_{2}\left( 1-\frac{\beta _{h}\sigma _{h}}{k_{1}k_{2}}\right) =k_{1}k_{2}(1-{\mathcal {R}}_{h})>0 \quad if \quad {\mathcal {R}}_{h} \in {\mathcal {R}}_{0}<1 \end{aligned}$$
Thus, all the eigenvalues of the Jacobian matrix (6) are real and negative if \(\left\{ {\mathcal {R}}_{r}, {\mathcal {R}}_{h} \right\}\) \(\in\) \({\mathcal {R}}_{0}<1\), so that the Lassa fever free equilibrium \({\mathcal {E}}_{0}\) is locally asymptotically stable and unstable otherwise. \(\square\)
From an epidemiological perspective, Lemma 3 implies that the spread of Lassa fever can be effectively controlled in the population when \({\mathcal {R}}_{0}\) is less than unity, if the initial sizes of the subpopulations of the model system (1) are in the basin of attraction of the Lassa fever free equilibrium \({\mathcal {E}}_{0}\).
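As a quick numerical illustration of the threshold quantity (5) used in Lemma 3, the short sketch below evaluates \({\mathcal {R}}_{h}\), \({\mathcal {R}}_{r}\), and \({\mathcal {R}}_{0}\) for a set of hypothetical parameter values; these values are assumptions for illustration only and are not the fitted values of Table 2.

```python
# Sketch: evaluating the reproduction number of Eq. (5) for hypothetical parameter values.
def reproduction_number(beta_h, beta_r, sigma_h, mu_h, mu_r, delta_h, phi_h):
    k1 = sigma_h + mu_h
    k2 = mu_h + delta_h + phi_h
    R_h = beta_h * sigma_h / (k1 * k2)   # human contribution R_h
    R_r = beta_r / mu_r                  # rodent contribution R_r
    return R_h, R_r, R_h + R_r           # R_0 = R_h + R_r as in Eq. (5)

R_h, R_r, R_0 = reproduction_number(beta_h=0.05, beta_r=0.04, sigma_h=0.3,
                                    mu_h=1.0 / (60.45 * 52), mu_r=0.06,
                                    delta_h=0.05, phi_h=0.1)
print(f"R_h = {R_h:.3f}, R_r = {R_r:.3f}, R_0 = {R_0:.3f}")
```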
Existence of Lassa fever endemic equilibrium (EEP)
We shall investigate the existence of the Lassa fever endemic equilibrium for system (1). The endemic equilibrium, denoted by \({\mathcal {E}}_{1}=(S_{h}^{**}, E_{h}^{**}, I_{h}^{**}, R_{h}^{**}, S_{r}^{**}, I_{r}^{**})\), represents the steady-state solution in the presence of the disease. Setting the right-hand sides of system (1) to zero and solving simultaneously in terms of the associated force of infection gives
$$\begin{aligned} S_{h}^{**}= & {} \frac{\Lambda _{h}k_{1}k_{2}k_{3}}{k_{1}k_{2}k_{3}\beta _{1}^{**} + k_{1}k_{2}k_{3}\mu _{h} - \beta _{1}^{**}\sigma _{h}\tau _{h}\phi _{h}} \nonumber \\ E_{h}^{**}= & {} \frac{\beta _{1}^{**}\Lambda _{h}k_{2}k_{3}}{k_{1}k_{2}k_{3}\beta _{1}^{**} + k_{1}k_{2}k_{3}\mu _{h} - \beta _{1}^{**}\sigma _{h}\tau _{h}\phi _{h}} \nonumber \\ I_{h}^{**}= & {} \frac{\beta _{1}^{**}\Lambda _{h}\sigma _{h}k_{3}}{k_{1}k_{2}k_{3}\beta _{1}^{**} + k_{1}k_{2}k_{3}\mu _{h} - \beta _{1}^{**}\sigma _{h}\tau _{h}\phi _{h}} \nonumber \\ R_{h}^{**}= & {} \frac{\beta _{1}^{**}\Lambda _{h}\sigma _{h}\phi _{h}}{k_{1}k_{2}k_{3}\beta _{1}^{**} + k_{1}k_{2}k_{3}\mu _{h} - \beta _{1}^{**}\sigma _{h}\tau _{h}\phi _{h}} \nonumber \\ S_{r}^{**}= & {} \frac{\Lambda _{r}}{\beta _{2}^{**} + \mu _{r} }, \qquad \qquad I_{r}^{**}=\frac{\beta _{2}^{**}\Lambda _{r}}{\mu _{r}(\beta _{2}^{**} + \mu _{r})} \end{aligned}$$
where the force of infection is given as
$$\begin{aligned} \beta _{1}^{**}=\frac{\beta _{r}I_{r}^{**}+\beta _{h}I_{h}^{**}}{N_{h}^{**}}, \qquad and \qquad \beta _{2}^{**}=\frac{\beta _{r}I_{r}^{**}}{N_{r}^{**}} \end{aligned}$$
Substituting expression (8) into the force of infection (9) at steady state yields the following polynomial
$$\begin{aligned} p_{1}(\beta _{1}^{**})^{2}+p_{2}\beta _{1}^{**}-p_{3} =0 \end{aligned}$$
where the coefficients \(p_{i}\) for \(i=1\ldots ,3\) of the polynomial are given as
$$\begin{aligned} p_{1}= & {} \mu _{r}{\mathcal {R}}_{r}\left( \Lambda _{h}k_{2}k_{3}+\Lambda _{h}\sigma _{h}k_{3}+\Lambda _{h}\sigma _{h}k_{3}+\Lambda _{h}\sigma _{h}\phi _{h}\right) \\ p_{2}= & {} \mu _{r}{\mathcal {R}}_{r}\left[ \Lambda _{h}k_{1}k_{2}k_{3}+\beta _{h}\Lambda _{h}\sigma _{h}k_{3}+\mu _{r}({\mathcal {R}}_{r}-1)(k_{1}k_{2}k_{3}-\sigma _{h}\tau _{h}\phi _{h})\right] \\ p_{3}= & {} \Lambda _{r}k_{1}k_{2}k_{3} \mu _{r}^{2}{\mathcal {R}}_{r}({\mathcal {R}}_{r}-1) \end{aligned}$$
It can be seen that the coefficient \(p_{1}\) is positive while the sign of \(p_{2}\) and \(p_{3}\) depends on the values of the reproduction number. That is, if \(\left\{ {\mathcal {R}}_{h}, {\mathcal {R}}_{r} \in {\mathcal {R}}_{0}>1 \right\}\), then \(p_{2}>0\) and \(p_{3}>0\). In addition, for \(p_{2}\) to be positive, \(k_{1}k_{2}k_{3}>\sigma _{h}\tau _{h}\phi _{h}\) so that there is at least one sign change in the sequence of coefficients \(p_{1}, p_{2}, p_{3}\). Thus, by Descartes rule of signs, there exists at least one positive real root for (10) whenever \({\mathcal {R}}_{0}>1\). Therefore, the following result is established.
The model system (1) has at least one endemic equilibrium whenever \({\mathcal {R}}_{0}>1\).
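Given numerical values of the coefficients, the endemic force of infection can also be located directly as the positive root of the quadratic (10). A small sketch is given below; the coefficient values are purely hypothetical and serve only to illustrate the root-finding step.

```python
# Sketch: locating the positive root of p1*x^2 + p2*x - p3 = 0 from Eq. (10).
import numpy as np

p1, p2, p3 = 2.0e4, 3.5e3, 1.2e3                # hypothetical coefficient values
roots = np.roots([p1, p2, -p3])                 # note the sign of the constant term
positive = roots[(roots.real > 0) & np.isclose(roots.imag, 0)].real
print(positive)                                 # unique positive root when R_0 > 1
```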
Data fitting and parameter estimation
As summarized in Table 2, we obtained our parameter values through three different strategies. Model (1) has ten parameters, and realistic values for two of these parameters are available in the literature. We further estimate two demographic parameter values for Nigeria, namely the natural death and recruitment rates. The natural death rate is estimated as \(\mu _{h}=\frac{1}{60.45 \times 52}\) per week, where 60.45 years is the average lifespan in Nigeria [18]. Since we assume from model (1) that the total human population is \(N_{h}=\frac{\Lambda _{h}}{\mu _{h}}\), substituting the total human population of 214,028,302 [18] and the estimated value of \(\mu _{h}\), we obtain the recruitment rate as 68,088 per week. We obtained five of the parameters by fitting model (1) to the observed cumulative cases of infected humans, based on Nigerian Lassa fever weekly reported cases from January 2018 to December 2020, obtained from the Nigeria Centre for Disease Control (NCDC) database [19]. Using the mathematical software MATLAB-R2017b, the model fitting was carried out using the nonlinear least squares method. The process entails minimizing the sum of the squared differences between each observed cumulative confirmed data point and the corresponding value predicted by model (1). The root mean square error (RMSE) is very close to zero; this implies that model (1) fits the data very well and can be used to make reliable predictions for the dynamics of this disease in the populace. All baseline parameter values obtained from fitting the data between 2018 and 2020 are tabulated in Table 2. Furthermore, Fig. 2 depicts the data fitting of the cumulative confirmed cases for 2018, 2019, and 2020, respectively. It must be noted that in Table 2 we also present the estimated mean value of each parameter, defined as the average of the fitted parameter values from 2018 to 2020. The estimated mean values are then used in the next section (unless otherwise stated) to carry out the sensitivity analysis and to simulate different scenarios of Lassa fever transmission dynamics in Nigeria.
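The fitting procedure described above can be reproduced in outline with a generic nonlinear least-squares routine. The sketch below (Python/SciPy, as an alternative to the MATLAB routine used in the paper) fits the two transmission probabilities to a vector of weekly cumulative confirmed cases by augmenting the state with the cumulative incidence \(\int \sigma_h E_h\, dt\). The data, fixed parameter values, initial conditions, and starting guesses are hypothetical placeholders, not the NCDC data or the fitted values of Table 2.

```python
# Sketch: fitting beta_h and beta_r to hypothetical weekly cumulative case counts
# by nonlinear least squares.  All data and starting values are placeholders.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

fixed = dict(Lambda_h=68088.0, tau_h=0.01, mu_h=1.0 / (60.45 * 52), sigma_h=0.3,
             phi_h=0.1, delta_h=0.05, Lambda_r=500.0, mu_r=0.06)
weeks = np.arange(1, 53)                                   # one year of weekly reports
observed = 30.0 * weeks + 5.0 * np.sqrt(weeks)             # hypothetical cumulative cases

def rhs(t, y, beta_h, beta_r, p):
    S_h, E_h, I_h, R_h, S_r, I_r, C = y                    # C = cumulative incidence
    N_h = S_h + E_h + I_h + R_h
    beta1 = (beta_r * I_r + beta_h * I_h) / N_h
    beta2 = beta_r * I_r / (S_r + I_r)
    return [p["Lambda_h"] + p["tau_h"] * R_h - (beta1 + p["mu_h"]) * S_h,
            beta1 * S_h - (p["sigma_h"] + p["mu_h"]) * E_h,
            p["sigma_h"] * E_h - (p["phi_h"] + p["mu_h"] + p["delta_h"]) * I_h,
            p["phi_h"] * I_h - (p["mu_h"] + p["tau_h"]) * R_h,
            p["Lambda_r"] - (beta2 + p["mu_r"]) * S_r,
            beta2 * S_r - p["mu_r"] * I_r,
            p["sigma_h"] * E_h]                            # new confirmed cases accumulate in C

def residuals(theta):
    beta_h, beta_r = theta
    y0 = [2.14e8, 50.0, 20.0, 0.0, 9e3, 10.0, 0.0]         # hypothetical initial sizes
    sol = solve_ivp(rhs, (0.0, weeks[-1]), y0, t_eval=weeks,
                    args=(beta_h, beta_r, fixed), method="RK45")
    return sol.y[6] - observed                             # model minus observed cumulative cases

fit = least_squares(residuals, x0=[0.05, 0.04], bounds=([0.0, 0.0], [1.0, 1.0]))
rmse = np.sqrt(np.mean(fit.fun ** 2))
print(fit.x, rmse)
```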
Data fitting of the Lassa fever model (1) using a 2018 cumulative cases; b 2019 cumulative cases; c 2020 cumulative cases
Table 2 Values of the parameters of the Lassa fever model (1)
In this section, we further explore the impact of each parameter on the transmission dynamics of Lassa fever in Nigeria. To achieve this, we carried out a sensitivity analysis to determine the effect of each parameter of the threshold quantity, using the data presented in Table 2. The sensitivity indices were obtained by employing the approach in [21, 22]. The sensitivity index of each parameter, computed from the threshold quantity obtained in (5), is presented in Table 3 together with the corresponding reproduction number. Furthermore, we present a bar plot of the sensitivity indices in Fig. 3.
Sensitivity indices of the Lassa fever reproduction number \({\mathcal {R}}_{0}\) with respective to each year parameter value : a 2018 parameter values as given in Table 2; b 2019 parameter values as given in Table 2; c 2020 parameter values as given in Table 2; d Mean parameter values as given in Table 2
Table 3 Sensitivity indices of the reproduction number parameters
Since we use the estimated mean parameter values as our baseline parameter values in predicting the dynamics of Lassa fever in Nigeria, we interpret the sensitivity indices using the mean-value indices. The results show that the transmission probability from human to human \(\beta _{h}\) and the transmission probability from rodents to humans and rodents \(\beta _{r}\) have the highest positive indices, with values of 0.6887 and 0.3113, respectively. A positive index implies that a decrease (or increase) of \({\mathcal {H}}\%\) in the transmission probability of infection from humans to humans \(\beta _{h}\), or in the transmission probability from rodents to humans and rodents \(\beta _{r}\), will decrease (or increase) the reproduction number. Likewise, the recovery rate of humans \(\phi _{h}\) and the natural death rate of rodents \(\mu _{r}\) have the largest negative indices, with values of \(-0.6591\) and \(-0.3113\), respectively. A negative index implies that an increase of \({\mathcal {H}}\%\) in the recovery rate of humans \(\phi _{h}\), or in the natural death rate of rodents \(\mu _{r}\), will decrease the reproduction number by \({\mathcal {H}}\%\), and vice versa. An epidemiological insight from this result is that any control strategy that reduces the transmission of infection from humans or rodents \((\beta _{h}, \beta _{r})\), or that increases the recovery rate of humans \(\phi _{h}\) or the death rate of rodents \(\mu _{r}\), will efficiently curtail the spread of Lassa fever in Nigeria. Good examples of such control strategies are effective human hygiene and behavior that reduce the transmission probability of the disease, and the elimination of infected rodents using any accessible means (such as rodenticides and rat traps), since it is evident that an increase in the natural death rate of rodents decreases the reproduction number of the disease.
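Normalized forward sensitivity indices of the kind reported in Table 3 can be reproduced by differentiating the closed-form expression (5). The sketch below does this symbolically; the parameter values supplied are hypothetical and do not reproduce the indices of Table 3.

```python
# Sketch: normalized forward sensitivity indices of R0 (Eq. 5), S_p = (dR0/dp) * (p / R0).
import sympy as sp

beta_h, beta_r, sigma_h, mu_h, mu_r, delta_h, phi_h = sp.symbols(
    'beta_h beta_r sigma_h mu_h mu_r delta_h phi_h', positive=True)
k1, k2 = sigma_h + mu_h, mu_h + delta_h + phi_h
R0 = beta_h * sigma_h / (k1 * k2) + beta_r / mu_r            # Eq. (5)

values = {beta_h: 0.05, beta_r: 0.04, sigma_h: 0.3,          # hypothetical values,
          mu_h: 1 / (60.45 * 52), mu_r: 0.06,                # not those of Table 2
          delta_h: 0.05, phi_h: 0.1}

for p in (beta_h, beta_r, sigma_h, mu_h, mu_r, delta_h, phi_h):
    index = sp.diff(R0, p) * p / R0                          # elasticity of R0 w.r.t. p
    print(p, float(index.subs(values)))
```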
In this section, we present the results of the numerical simulation of our model and of its mathematical analysis. These results corroborate our analytical findings. We explored the dynamical behavior of the infected human and rodent populations under different scenarios, using the information from the sensitivity analysis results. We simulated model (1) using the MATLAB solver ode45, which is a six-stage, fifth-order Runge–Kutta method. It is imperative to mention that we considered the total infected human population as the sum of the exposed and infectious humans \((E_{h}+I_{h})\). Furthermore, we use the estimated mean values of the parameters given in Table 2 as the baseline parameter values, unless otherwise stated.
2-D Contour plot of the reproduction number \({\mathcal {R}}_{0}\) of Lassa fever model (1), varying recovery rate of humans with respect to transmission probability rate from humans. Parameter values used are as given in Table 2 except for \(\delta _{h}=0.4887\) so that \({\mathcal {R}}_{0}=0.7461 < 1.\)
Simulations of model (1) with varying effects of parameters on total infected humans population \((E_{h}+I_{h})\): a transmission probability from humans to humans \(\beta _{h}=0.084 ({\mathcal {R}}_{0}=1.91)\), \(\beta _{h}=0.042 ({\mathcal {R}}_{0}=1.25)\), and \(\beta _{h}=0.021 ({\mathcal {R}}_{0}=0.92)\); b recovery rate of humans \(\phi _{h}=0.123 ({\mathcal {R}}_{0}=1.91),\) \(\phi _{h}=0.062 ({\mathcal {R}}_{0}=1.90)\), \(\phi _{h}=0.031 ({\mathcal {R}}_{0}=1.89)\); Other parameter values used are as given in Table 2
Figure 4 illustrates a 2-D contour plot which shows the behavior of the reproduction number as the recovery rate of humans is varied against the transmission probability rate from human to human \(\beta _{h}\). An increase in the transmission probability rate from human to human increases the reproduction number. For instance, if we fix the recovery rate of humans (x-axis) \(\phi _{h}\) at 0.4, a transmission probability from humans (y-axis) of 0.3 yields a reproduction number between (1, 1.2), while a transmission probability from humans of 0.9 produces a reproduction number between (1.6, 1.8). Furthermore, an increase in the recovery rate of humans \(\phi _{h}\) reduces the reproduction number. For instance, if we fix the transmission probability from humans (y-axis) \(\beta _{h}\) at 0.5, a recovery rate of humans (x-axis) of 0.4 yields a reproduction number between (1.2, 1.4), while a recovery rate of humans of 0.8 produces a reproduction number between (1, 1.2). From these results, it can be suggested that, to reduce the reproduction number of the disease below unity, a control strategy that facilitates a good and speedy recovery rate of humans, together with a reduction of the transmission rate between humans, will be sufficient to curtail the disease in the population.
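A contour plot of this kind can be generated by evaluating (5) on a grid of \(\phi _{h}\) and \(\beta _{h}\) values. A minimal sketch is shown below; the remaining parameters are set to hypothetical values, so the plot only illustrates the procedure rather than reproducing Fig. 4.

```python
# Sketch: contour plot of R0 (Eq. 5) over a grid of recovery rate (phi_h) and
# human-to-human transmission probability (beta_h); other parameters are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

sigma_h, mu_h, mu_r, delta_h, beta_r = 0.3, 1.0 / (60.45 * 52), 0.06, 0.4887, 0.02

phi_h, beta_h = np.meshgrid(np.linspace(0.05, 1.0, 200), np.linspace(0.05, 1.0, 200))
k1 = sigma_h + mu_h
k2 = mu_h + delta_h + phi_h
R0 = beta_h * sigma_h / (k1 * k2) + beta_r / mu_r            # Eq. (5) on the grid

cs = plt.contourf(phi_h, beta_h, R0, levels=10)
plt.colorbar(cs, label=r'$\mathcal{R}_0$')
plt.xlabel(r'recovery rate of humans $\phi_h$')
plt.ylabel(r'transmission probability $\beta_h$')
plt.show()
```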
In Fig. 5, we demonstrate the effects of the transmission probability from human to human \(\beta _{h}\) and of the recovery rate of infected humans \(\phi _{h}\) on the infected human population, using three different parameter values for each. Figure 5a depicts the effect of \(\beta _{h}\) on the dynamics of the infected human population. The results show that a decrease in the transmission rate of the disease from human to human decreases the infected human population. For instance, when \(\beta _{h}=0.084\) and \(\beta _{h}=0.042\), the reproduction number is \({\mathcal {R}}_{0}=1.91\) and \({\mathcal {R}}_{0}=1.25\), respectively, leaving the disease in its endemic state. However, decreasing \(\beta _{h}\) to 0.021 drives the infected population to its disease-free equilibrium \(({\mathcal {R}}_{0}=0.92 <1)\). This is the point at which the disease can be curtailed in the population, as described in Lemma 3. In Fig. 5b, the results illustrate that an increase in the recovery rate of humans increases the total infected human population, given that the transmission probability rate remains at its baseline value. Interestingly, an increase in the recovery rate of humans alone is not enough to reduce the disease in the population, as can be seen from the estimated reproduction numbers reported. This can be traced to the effect of the loss of immunity in recovered individuals. Since the model assumptions allow reinfection of recovered humans, such dynamics are expected in the population when the recovery rate increases while a high transmission rate persists. Hence, it is important to reduce the transmission rate of the disease in order to make the recovery rate of humans an effective control strategy for curtailing the disease in the population.
In order to see the effects of the transmission probability of rodents \(\beta _{r}\) and the natural death rate of rodents \(\mu _{r}\) on the infected rodent population, we simulate the infected rodent population using different parameter values of \(\beta _{r}\) and \(\mu _{r}\), respectively, in Fig. 6. It can be seen in Fig. 6a that a decrease in the transmission probability rate of rodents leads to a decrease in the infected rodent population. For instance, at the baseline parameter value \(\beta _{r}=0.037\), the corresponding reproduction number is \({\mathcal {R}}_{0}=1.91\), which draws the final size of the infected rodent population close to 500. However, decreasing the value of \(\beta _{r}\) to 0.009 reduces the reproduction number to \({\mathcal {R}}_{0}=1.46\), which leads to a decrease in the infected rodent population. Thus, by limiting the transmission rate of infection between rodents, the infected rodent population can be reduced to a minimum size. Figure 6b illustrates the effect of the natural death rate \(\mu _{r}\) on the infected rodent population. It shows that an increase in the natural death rate of rodents decreases the final size of the infected rodent population. For example, at \(\mu _{r}=0.016\), the final infected rodent population size lies between \(\left( 4000, 5000\right)\), while increasing the natural death rate of rodents to \(\mu _{r}=0.063\) reduces the final infected rodent population size to below 1000. This result implies that, by reducing the number of rodents in the environment, the infected rodent population can be reduced.
Simulations of model (1) with varying effects of parameters on the infected rodent population: a transmission probability from rodents to rodents and humans \(\beta _{r}=0.037 ({\mathcal {R}}_{0}=1.91)\), \(\beta _{r}=0.019 ({\mathcal {R}}_{0}=1.61)\), and \(\beta _{r}=0.009 ({\mathcal {R}}_{0}=1.46)\); b natural death rate of rodents \(\mu _{r}=0.063 ({\mathcal {R}}_{0}=1.91),\) \(\mu _{r}=0.031 ({\mathcal {R}}_{0}=2.49)\), \(\mu _{r}=0.016 ({\mathcal {R}}_{0}=3.68)\). Other parameter values used are as given in Table 2
Convergence of solution trajectories for infected humans \((E_{h}+I_{h})\) with different initial sizes. Parameter values used are as given in Table 2 except for a \(\beta _{h}=1.6889\) so that \({\mathcal {R}}_{0}=3.22 > 1.\); b \(\beta _{h}=0.2815, \sigma _{h}=0.3701, \mu _{r}=0.1882, \beta _{r}=0.0124\) so that \({\mathcal {R}}_{0}= 0.50< 1.\)
The results in Fig. 7 show the convergence of solution trajectories for the infected humans. Different initial population sizes are used to illustrate the stability behavior of the infected human population under small or large perturbations. Figure 7a depicts the stability of the endemic state of the disease when \({\mathcal {R}}_{0}>1\), while Fig. 7b illustrates the stability of the disease-free equilibrium of the model. A simple interpretation of this result is that the infected human population converges to the same equilibrium regardless of changes in the initial size of the subpopulation.
In this study, we formulated a deterministic model using systems of ordinary differential equations to investigate the transmission dynamics of Lassa fever in the population. The population was stratified into human and rodent compartments. The developed model was parameterized using cumulative reported data obtained from the NCDC. Results show that the root mean square error is close to zero; this implies that the proposed model fits the data well and can be used to make accurate predictions for the dynamics of this disease in Nigeria. We established that the LFFE of the model is locally asymptotically stable if the threshold quantity \({\mathcal {R}}_{0}<1\), and unstable otherwise. We further carried out a sensitivity analysis of the reproduction number to determine the influence of each parameter on the transmission dynamics of Lassa fever in Nigeria. The results show that the most influential parameters on the reproduction number are the transmission probability rate from human to human \(\beta _{h}\), the recovery rate of infected humans \(\phi _{h}\), the transmission probability rate from rodents to humans and rodents \(\beta _{r}\), and the natural death rate of rodents \(\mu _{r}\). Following this result, numerical simulations were carried out to explore the effect of the most sensitive parameters on the infected human population and rodent population, respectively. Overall, the results from this study suggest that any control strategy that decreases the rodent population and the transmission probability rate from rodents to humans and rodents will advance the control of Lassa fever in the population.
Since Lassa fever is endemic in some regions of Africa, it is important to quantify the re-occurrence of outbreaks of this disease in Nigeria, given the growth in reported cases over the years. Using the assembled data from 2018 to 2020, we will forecast future epidemic outbreaks using mathematical and computational models. Furthermore, in order to predict the eradication of Lassa fever in Nigeria, we will explore the advantage of multiple control strategies in curtailing this disease in the population. This will be achieved by extending model (1) to an optimal control problem using Pontryagin's maximum principle. In addition, since controlling and eradicating any form of disease in a large population can be both demanding and expensive, we will employ a cost-effectiveness analysis to investigate the most cost-effective strategy among the various combinations of control strategies.
All data supporting the findings of this study are included in the list of references and can be obtained at the Nigeria Centre for Disease Control (NCDC) http://www.ncdc.gov.ng/reports.
LF: Lassa fever
LV: Lassa virus
LFFE: Lassa fever free equilibrium
NCDC: Nigeria Centre for Disease Control
MATLAB: Matrix laboratory
World Health Organization: Infectious diseases. https://www.who.int/topics/infectiousdiseases/en/. Accessed 9 Feb 2021
Onah, I.S., Collins, O.C.: Dynamical system analysis of a Lassa fever model with varying socioeconomic classes. J. Appl. Math. 2020 (2020)
Mariën, J., Borremans, B., Kourouma, F., Baforday, J., Rieger, T., Günther, S., Magassouba, N., Leirs, H., Fichet-Calvet, E.: Evaluation of rodent control to fight Lassa fever based on field data and mathematical modelling. Emerg. Microbes Infections 8(1), 640–649 (2019)
Olugasa, B.O., Odigie, E.A., Lawani, M., Ojo, J.F., et al.: Development of a time-trend model for analyzing and predicting case-pattern of Lassa fever epidemics in Liberia, 2013–2017. Ann. Afr. Med. 14(2), 89 (2015)
Musa, S.S., Zhao, S., Gao, D., Lin, Q., Chowell, G., He, D.: Mechanistic modelling of the large-scale Lassa fever epidemics in Nigeria from 2016 to 2019. J. Theor. Biol. 493, 110209 (2020)
Bakare, E., Are, E., Abolarin, O., Osanyinlusi, S., Ngwu, B., Ubaka, O.N.: Mathematical modelling and analysis of transmission dynamics of Lassa fever. J. Appl. Math. 2020 (2020)
Zhao, S., Musa, S.S., Fu, H., He, D., Qin, J.: Large-scale Lassa fever outbreaks in Nigeria: quantifying the association between disease reproduction number and local rainfall. Epidemiol. Infect. 148 (2020)
Akinpelu, F., Ojo, M.: A mathematical model for the dynamic spread of infection caused by poverty and prostitution in Nigeria. Int. J. Math. Phys. Sci. Res. 4, 33–47 (2016)
Akinpelu, F., Ojo, M.: Mathematical analysis of effect of isolation on the transmission of Ebola virus disease in a population. Asian Res. J. Math. 1–12 (2016)
Dachollom, S., Madubueze, C.E.: Mathematical model of the transmission dynamics of Lassa fever infection with controls. Math. Model Appl. 5, 65–86 (2020)
Fichet-Calvet, E., Rogers, D.J.: Risk maps of Lassa fever in west Africa. PLoS Negl. Trop Dis. 3(3), 388 (2009)
Jain, S., Atangana, A.: Analysis of Lassa hemorrhagic fever model with non-local and non-singular fractional derivatives. Int. J. Biomath. 11(08), 1850100 (2018)
Lakshmikantham, V., Leela, S., Martynyuk, A.A.: Stability Analysis of Nonlinear Systems. Springer, Berlin (1989)
Ojo, M., Akinpelu, F.: Lyapunov functions and global properties of seir epidemic model. Int. J. Chem. Math. Phys. 1(1) (2017)
Oke, S.I., Ojo, M.M., Adeniyi, M.O., Matadi, M.B.: Mathematical modeling of malaria disease with control strategy. Commun. Math. Biol. Neurosci. 2020 (2020)
Diekmann, O., Heesterbeek, J.A.P., Metz, J.A.: On the definition and the computation of the basic reproduction ratio r 0 in models for infectious diseases in heterogeneous populations. J. Math. Biol. 28(4), 365–382 (1990)
Gbadamosi, B., Ojo, M.M., Oke, S.I., Matadi, M.B.: Qualitative analysis of a dengue fever model. Math. Comput. Appl. 23(3), 33 (2018)
Central Intelligence Agency: The world factbook. https://www.cia.gov/the-world-factbook/countries/nigeria/
Nigeria Centre for Disease Control: Weekly epidemiological report. https://ncdc.gov.ng/reports/weekly
Loyinmi, A.C., Akinfe, K.T., Ojo, A.A.: Qualitative analysis and dynamical behavior of a Lassa Haemorrhagic fever model with exposed rodents and saturated incidence rate (2020)
Ojo, M., Gbadamosi, B., Olukayode, A., Oluwaseun, O.R.: Sensitivity analysis of dengue model with saturated incidence rate. Open Access Lib. J. 5(03), 1 (2018)
Ojo, M., Akinpelu, F.: Sensitivity analysis of Ebola virus model. Asian Res. J. Math. 1–10 (2017)
Department of Ecology and Evolutionary Biology, University of Kansas, Lawrence, USA
Mayowa M. Ojo
Department of Computer Sciences, Landmark University, Omu-Aran, Kwara State, Nigeria
B. Gbadamosi
Institute for Computational and Data Sciences, University at Buffalo, State University of New York, Albany, USA
Temitope O. Benson
Department of Physical Sciences, Landmark University, Omu-Aran, Kwara State, Nigeria
O. Adebimpe
Department of Microbiology, Landmark University, Omu-Aran, Kwara State, Nigeria
A. L. Georgina
BG, TOB, OA, ALG, and MMO participated in drafting the manuscript. BG and MMO developed the model. MMO analyzed the model and carried out the model fitting and simulations, while TOB and MMO interpreted and discussed the numerical results. All authors read and approved the final manuscript.
Correspondence to Mayowa M. Ojo.
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Ojo, M.M., Gbadamosi, B., Benson, T.O. et al. Modeling the dynamics of Lassa fever in Nigeria. J Egypt Math Soc 29, 16 (2021). https://doi.org/10.1186/s42787-021-00124-9
Reproduction number
Model fitting
Is there an analogue of coherence between data sets related by a non-linear transformation?
Calculating the coherence (sometimes called magnitude squared coherence) between two signals indicates the presence or lack thereof of a linear transformation between the two signals. Is there an analogue for signals which are related by a non-linear transformation? Let's assume that the expected non-linear transformation is known. It would be even better if the calculation could identify the non-linear transformation within some bounds, but that is probably just wishful thinking.
An example of coherence between ocean water level (due to tides) and ground water level measured in a well, taken from the Wikipedia article linked above. This is an example of the coherence between two signals related by a linear transformation.
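To make the point concrete, here is a small sketch (assuming NumPy and SciPy are available) of what I mean: coherence clearly flags a linearly filtered copy of a signal, but is essentially blind to a simple quadratic transformation of the same signal.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0
x = rng.standard_normal(60_000)

# Linearly transformed copy (an FIR filter) vs. a non-linear (quadratic) transformation
y_linear = signal.lfilter([1.0, 0.5, 0.25], [1.0], x)
y_nonlinear = x ** 2

f, c_linear = signal.coherence(x, y_linear, fs=fs, nperseg=1024)
f, c_nonlinear = signal.coherence(x, y_nonlinear, fs=fs, nperseg=1024)

print(c_linear.mean())     # ~1: the linear relationship is detected at all frequencies
print(c_nonlinear.mean())  # ~0: the quadratic relationship is invisible to coherence
```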
fourier-transform frequency-spectrum transform coherence
Chris Mueller
Are you interested in coherence's ability to discriminate correlations at different frequencies, or are you interested in any measure of non-linear correlation? Coherence is basically cross-correlation in the frequency domain.
Mutual information can measure non-linear correlations, but says nothing about the form of the correlation. It's also difficult to estimate accurately without large amounts of data (kernel density estimation is a popular tool to help with that right now, though I think copulas have begun to gain traction).
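As a rough illustration (using scikit-learn's k-nearest-neighbour based estimator rather than KDE or copulas), mutual information does pick up a purely quadratic dependence that linear correlation misses, but it only returns a number, not the form of the relationship:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_regression

rng = np.random.default_rng(1)
x = rng.standard_normal(20_000)
y = x ** 2 + 0.1 * rng.standard_normal(20_000)   # non-linear dependence, ~zero linear correlation

print(np.corrcoef(x, y)[0, 1])                                      # close to 0
print(mutual_info_regression(x.reshape(-1, 1), y, n_neighbors=5))   # clearly above 0
```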
I don't know that anybody has done so, but you could alternatively use one signal to predict another using an artificial neural network with one hidden layer. Your correlation metric could then be something like an RMSE between predicted and target signals. It feels like a hack, though. With some work you could try to infer something about the relationship type based on the trained weights of the network. I'm not an expert, but I don't know of a formal framework to actually do that. It's also not an established metric, so you have to also do the work to convince people that it's meaningful (much more difficult if you can't cite sources that did it before you).
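A minimal sketch of that idea with scikit-learn's MLPRegressor (one hidden layer) is below; the held-out RMSE, normalised by the target's standard deviation, acts as the correlation-like score. Again, this is a hack rather than an established metric:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
x = rng.standard_normal(20_000)
y = np.sin(3 * x) + 0.1 * rng.standard_normal(20_000)   # unknown non-linear mapping plus noise

x_train, x_test = x[:15_000], x[15_000:]
y_train, y_test = y[:15_000], y[15_000:]

net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
net.fit(x_train.reshape(-1, 1), y_train)
pred = net.predict(x_test.reshape(-1, 1))

rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(rmse / y_test.std())   # well below 1 when one signal predicts the other
```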
dpbont
Not the answer you're looking for? Browse other questions tagged fourier-transform frequency-spectrum transform coherence or ask your own question.
Question on spatial sampling for non-linear and/or non-uniform arrays
Is there a general relation between mean power and period
Using spectral coherence for computing similarity between time series data
Statistical significance of coherence values
$\mathrm{FFT}$ generates non-linear noise
Why is not Fourier Transform Good for Non-linear Processes
Compute phase coherence between two eeg signals
Why does the addition of noise improve coherence between two signals?
How is frequency related to data rate?
Coherence and non-linear interactions
Analysis of dynamic properties on forest restoration-population pressure model
Mingzhu Qu 1 ,
Chunrui Zhang 1 ,
Xingjian Wang 2 ,
Department of Mathematics, Northeast Forestry University, Harbin 150040, China
College of Information and Computer Engineering, Northeast Forestry University, Harbin 150040, China
On the basis of logistic models of forest restoration, we consider the influence of population pressure on forest restoration and establish a reaction-diffusion model with Holling II functional responses. We study this reaction-diffusion model under Dirichlet boundary conditions and obtain a positive equilibrium. In the square region, we analyze the existence of Turing instability and Hopf bifurcation near this point. Square patterns and mixed patterns are obtained when steady-state bifurcation occurs, and hyperhexagonal patterns appear when Hopf bifurcation occurs.
forest restoration,
population pressure,
Turing instability,
Hopf bifurcation,
Citation: Mingzhu Qu, Chunrui Zhang, Xingjian Wang. Analysis of dynamic properties on forest restoration-population pressure model[J]. Mathematical Biosciences and Engineering, 2020, 17(4): 3567-3581. doi: 10.3934/mbe.2020201
Research article | Open | Open Peer Review | Published: 22 November 2018
Using a GIS to support the spatial reorganization of outpatient care services delivery in Italy
Martina Calovi ORCID: orcid.org/0000-0002-2317-11901 &
Chiara Seghieri2
Studying and measuring accessibility to care services has become a major concern for health care management, particularly since the global financial collapse. This study focuses on Tuscany, an Italian region, which is re-organizing its inpatient and outpatient systems in line with new government regulations. The principal aim of the paper is to illustrate the application of GIS methods with real-world scenarios to provide support to evidence-based planning and resource allocation in healthcare.
Spatial statistics and geographical analyses were used to provide health care policy makers with a real scenario of accessibility to outpatient clinics. Measures for a geographical potential spatial accessibility index using the two-step floating catchment area method for outpatient services in 2015 were calculated and used to simulate the rationalization and reorganization of outpatient services. Parameters including the distance to outpatient clinics and volumes of activity were taken into account.
The spatial accessibility index and the simulation of reorganization in outpatient care delivery are presented through three cases, which highlight three different managerial strategies. The results revealed the municipalities where health policy makers could consider a new spatial location, a shutdown or combining selected outpatient clinics while ensuring equitable access to services.
A GIS-based approach was designed to provide support to healthcare management and policy makers in defining evidence-based actions to guide the reorganization of a regional health care delivery system.
The analysis provides an example of how GIS methods can be applied to an integrated framework of administrative health care and geographical data as a valuable instrument to improve the efficiency of healthcare service delivery, in relation to the population's needs.
To address the financial pressure resulting from the economic crisis in Europe (2008-date), and with the growing proportion of the elderly in the population (24% aged 60 or over) [1], policy makers are increasingly focusing on finding an acceptable balance between adequate cost control measures and ensuring that the population continues to receive high-quality, appropriate and efficient care [2].
The European Observatory suggests that policy makers should ascertain which tools respond best to the needs and consider the impact of the reforms they propose to achieve better health system goals [2]. These include financial protection, efficiency, equity, quality, responsiveness, transparency, and accountability [2, 3].
The Italian national health system follows the Beveridge model [4, 5] and provides universal coverage for comprehensive and essential health services through general taxation, recognizing health as a fundamental right of individuals and of collective interest for society. In the early 1990s, national reforms were implemented to transfer several key administrative and organizational responsibilities from central government to the 20 Italian regional administrations, thus making regional authorities more sensitive to controlling expenditure and promoting efficiency, quality, and patient satisfaction. The regional administrations are now responsible for organizing and delivering health services through local health units [6].
Like other EU countries, Italy has taken legislative measures and implemented a set of policy initiatives to respond to the economic crisis, through agreements between the national and regional governments, health pacts, and through annual finance acts and other legislative measures to control public spending [6]. The overall aim is to rationalize and to reorganize the health services by defining quality, structural, and technological standards. These include the expected catchments areas for each care discipline, and volume thresholds for specific surgical procedures (Health Minister Decree 20 April 2015). Regional governments are therefore attempting to balance the need for national standards, while providing an equitable geographical resource distribution.
Thus, the main objective of this research is to provide evidence on how a geographical information system (GIS) can be used to support health care management in the reorganization of care service delivery, by computing and analyzing spatial access to outpatient services relative to the demand.
GIS facilitates a unique geographic approach to a variety of problems by assessing the current distribution of necessary resources and anticipated future needs. As most health phenomena can be described spatially, GIS can be effectively used as a decision support system, not only in health care management but also in addressing urban planning, transportation, energy, and resource analyses. GIS technologies highlight the relationships between different kinds of data, enabling complex data to be interpreted in an easier and more realistic manner [7].
GIS has become essential in the health and healthcare sector in recent years. Extensive literature examines the use of GIS technology in epidemiology, typically analyzing the relationships between locations, environments, and diseases, studying their distribution and monitoring their dissemination over space and time [8].
Digital health care data have led to much interest in geospatial analyses in the healthcare management research field. Hilton et al. [9] outlined in 2005 a combination of GIS and human health applications within decision-making, using Anthony's Model [10], and defined this approach as "the management of people, assets, and services using spatial information to ensure the delivery of the health care service while assuring that specific tasks are carried out effectively and efficiently" [9].
The understanding of the spatial distribution of services is important for the improvement of the efficiency of care delivery organizations, while providing citizens with healthcare services that optimally respond to their needs. Creating an integrated framework of administrative health care and geographical data enables planners and managers to visualize and examine the impact of government policy decisions [11] over the territory. In particular, GIS approaches, spatial statistics analyses, and the measurement of accessibility indexes can be useful to inform policy makers on locating population needs, understanding where they are not covered by the local supply of services [12], or where there may be an oversupply.
Indeed, studying and measuring access to care services has been a major concern for health care management since the early 1970s [13,14,15,16,17,18,19,20]. Today, the importance of measuring spatial access to care services is well known and, as Gautam et al. stated in 2014, these kinds of measurements are important instruments that can help care managers provide more responsive services and reduce spatial inequalities [21]. Gautam et al. also noted that in the U.S. "effectively evaluating spatial healthcare disparity is crucial to improve health care access" [21].
In this context, the current distribution of the outpatient services in the Tuscany region (Italy) is analyzed, in relation to the population and to the current demand for services, using a comprehensive geographical health information infrastructure. The aim is to support the regional administration and care managers in terms of improvements in governance and equitable outpatient service delivery.
More broadly, our study addresses the question "how can an integrated framework of administrative health care and geographical data support the health care management in the reorganization, redesign and planning of health care services?"
The two-step floating catchment area (2SFCA) method is used to define the spatial access to outpatient services, based on the availability and accessibility of the outpatient facilities, and to examine access to the current service use in Tuscany in 2015. Oversupplies or lack of supplies are identified based on spatial criteria. This knowledge can help reorganize the care services delivery system more efficiently, and ensure universal and equitable access for the entire population.
The paper is divided into five sections. Section "BACKGROUND" outlines the methodology, presenting the spatial distribution of the outpatient clinics and the two-step floating catchment area method. Section "METHODS" outlines the data used, the demand, the supply, and the unit of analysis. Section "RESULTS" provides the results and presents the closure simulation. Section "DISCUSSION" outlines the discussion and the last section "CONCLUSIONS" presents the conclusions.
Spatial accessibility can be regarded as the balance between supply and demand within a geographical area, and we chose the 2SFCA method to compute the potential spatial accessibility index [22, 23]. The integrated analysis of this method combines distances and the supply-demand ratio, and is considered one of the best methods to measure potential spatial accessibility to health care services, through establishing the geographical location of services based on population needs [12, 24, 25].
The framework proposed in this study consists of three stages. First, the potential spatial accessibility index is implemented. Second, the potential reorganization of the outpatient services is simulated based on the results of the first step, and a new management solution is then suggested. The third and last step consists of computing a new potential spatial accessibility index to verify the repercussions of the managerial strategies throughout the outpatient system.
This integrated geographical framework combines administrative health care and geographical data, and can help health care planners visualize the impact of their policies.
The 2SFCA method requires the key data elements of population data (demand), healthcare service locations, and volumes data (supply), and the measured distance between demand and supply.
The data used for the analysis include individual-level administrative care data on all residents of Tuscany. An anonymous ID is assigned to each patient, which enables residents in Tuscany to be tracked in terms of access and use of any healthcare services. A regional administrative health care dataset of outpatient specialized visits in 2015 has been used in this research.
From the original dataset, the main clinical disciplines that patients could access were extracted: cardiology, gynecology, neurology, orthopedics, otorhinolaryngology, ophthalmology, dermatology, urology, and general surgery. Consultations provided at ERs have not been included.
All the statistical analyses were run using SAS version 9.4 (SAS Institute), and the geographical analyses were run using ArcMap version 10.3.1 (ESRI).
The study focuses on Tuscany, an Italian central region (Fig. 1). The Tuscan regional administration is subdivided into 280 municipalities that belong to 10 provinces. Since January 1, 2016, regional law 84/2015 splits the regional healthcare system into three new local health authorities (LHAs), which combine four teaching hospitals and the 12 previous LHAs.
Italy and Tuscany Region [The figure is created by the authors and is not taken from other sources]
The overall territory is divided into three "large areas" and 25 health districts, which are in charge of organizing and delivering the services of territorial health networks, social care, and social integration.
All the analyses were run at the municipality level, which is the smallest administrative unit, as the scale of analysis in the study is the smallest level of the territory.
According to the administrative health care database, Tuscany has more than 3.7 million inhabitants, ranging from the smallest municipality of the island of Capraia with 416 citizens, to the largest of Florence with 381,037 inhabitants (Table 1). For each municipality, a georeferenced centroid was defined, combining the residents of each municipality into a corresponding single point.
Table 1 – Population of Tuscany: descriptive statistics
The dasymetric mapping methodology was used to create the population centroids by interpolating residential land use areas, as available on the website of the Tuscan regional administration [26, 27]. The lack of public information on single individual residence addresses prompted this approach, and the methodology was chosen as it enables the distribution of the aggregated population data within each unit of analysis to be better estimated.
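A simplified, area-weighted version of this step is sketched below with GeoPandas. The file names and the municipality identifier column are hypothetical, and a full dasymetric model would also weight the residential patches by land use class rather than by area alone.

```python
import geopandas as gpd

# Hypothetical inputs: municipal boundaries and residential land use polygons (same projected CRS)
municipalities = gpd.read_file("municipalities.shp")
residential = gpd.read_file("residential_land_use.shp")

# Clip the residential patches to each municipality, then place the population point
# at the area-weighted centre of those patches instead of the plain geometric centroid.
patches = gpd.overlay(residential, municipalities[["municipality_id", "geometry"]], how="intersection")
patches["area"] = patches.geometry.area
patches["wx"] = patches.geometry.centroid.x * patches["area"]
patches["wy"] = patches.geometry.centroid.y * patches["area"]

agg = patches.groupby("municipality_id")[["wx", "wy", "area"]].sum()
centroids = gpd.GeoDataFrame(
    agg,
    geometry=gpd.points_from_xy(agg["wx"] / agg["area"], agg["wy"] / agg["area"]),
    crs=municipalities.crs,
)
```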
The selection criteria used to analyze the original dataset revealed that there were 246 outpatient clinics throughout Tuscany in 2015, which were accessed by patients more than two million times. Each clinic was integrated into a GIS environment, by geolocating the clinics by their addresses over the 280 municipalities of Tuscany. Of these municipalities, 120 have no outpatient clinics under their jurisdiction.
The total number of visits provided by each outpatient clinic for this study represented the health services delivery for each municipality, which range from 106 to 145,956. On average, the 246 outpatient clinics provided 8358 specialized visits in 2015.
Studies on accessibility use either Euclidean/linear distance or travel time distance. The Tuscan regional road network, available on the Open Toscana website, was used to calculate the travel time distances. The regional road network dataset is composed of linear elements (arches) that represent all the segments of each road and that are defined by points (nodes) at the junctions. The original dataset contains many missing records. To avoid errors during the analyses, information on travel speed and road type was interpolated, to fill in all the missing records and obtain a complete dataset. To calculate the distances between the centroids of the municipalities (origins) and the outpatient clinics (destinations), an origin-destination (OD) matrix was computed. The mean distance is about 73 min, the minimum distance is equal to a few minutes, and the maximum distance is about 3 h.
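A minimal sketch of this step with NetworkX is given below; the toy arcs, centroid nodes and clinic nodes are placeholders for the regional network (in which centroids and clinics would first be snapped to their nearest network nodes), and the study itself computed the matrix within its GIS environment.

```python
import networkx as nx

# Toy road arcs: (start node, end node, length in km, speed in km/h); real arcs come from the regional network.
road_arcs = [("n1", "n2", 4.0, 50.0), ("n2", "n3", 10.0, 90.0), ("n1", "n3", 20.0, 50.0)]
centroid_nodes = ["n1"]      # network nodes nearest to the municipality centroids (assumed)
clinic_nodes = ["n3"]        # network nodes nearest to the outpatient clinics (assumed)

# Each road arc contributes an edge whose weight is its traversal time in minutes.
G = nx.Graph()
for start, end, length_km, speed_kmh in road_arcs:
    G.add_edge(start, end, minutes=60.0 * length_km / speed_kmh)

# Origin-destination matrix: shortest travel time from every centroid to every clinic.
od_matrix = {}
for origin in centroid_nodes:
    reachable = nx.single_source_dijkstra_path_length(G, origin, weight="minutes")
    od_matrix[origin] = {clinic: reachable.get(clinic, float("inf")) for clinic in clinic_nodes}

print(od_matrix)   # {'n1': {'n3': 11.47}} -> going via n2 is faster than the direct arc
```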
Two-step floating catchment area method
The 2SFCA method, based on the spatial decomposition concept of Radke and Mu [28], has been chosen to compute the potential spatial accessibility index. This methodology is selected instead of a more sophisticated gravity model because the results are easily readable and intuitive, and because the algorithm overcomes cross-border limitations [20, 22, 29]. The research aim was to run all the analyses at the most local level to obtain results that best fit the population's needs for health services. On this basis, and given the level of analysis, the threshold of the catchment areas has been set at a travel time of 15 min [21].
In this analysis, supply is represented by the specialized visits supplied by each clinic, while the total resident population for each municipality represents the demand.
A conventional 2SFCA model was used. For each supply location j, the first step detects all demand locations k that are within a threshold time distance \(d_0\) from j, and then computes the supply-to-demand ratio \(R_j\) for the inhabitants within the catchment area:
$$ R_j = \frac{S_j}{\sum_{k \in \left\{ d_{kj} \le d_0 \right\}} \frac{D_k}{1000}} $$
where \(d_{kj}\) is the distance between k and j, \(D_k\) is the demand at location k that falls within the catchment, and \(S_j\) is the capacity of supply [23].
The second step detects, for each demand location i, all supply locations j within the threshold time distance \(d_0\) from location i, and adds together the supply-to-demand ratios \(R_j\) at those locations to obtain the full accessibility \(A_i^F\) at demand location i.
$$ A_i^F = \sum_{j \in \left\{ d_{ij} \le d_0 \right\}} R_j = \sum_{j \in \left\{ d_{ij} \le d_0 \right\}} \left( \frac{S_j}{\sum_{k \in \left\{ d_{kj} \le d_0 \right\}} \frac{D_k}{1000}} \right) $$
where \(d_{ij}\) is the distance between i and j, and \(R_j\) is the supply-to-demand ratio at supply location j that falls within the catchment centered at i [22].
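A compact sketch of these two steps in NumPy is given below. The array names are assumptions (the OD travel-time matrix, the yearly visit volumes used as supply, and the municipal populations used as demand), and the demand is scaled by 1,000 as in the formulas above.

```python
import numpy as np

def two_step_fca(travel_time, supply, demand, threshold=15.0):
    """Conventional 2SFCA accessibility index.

    travel_time : (n_municipalities, n_clinics) array of travel times in minutes
    supply      : (n_clinics,) yearly volumes of specialized visits
    demand      : (n_municipalities,) resident population
    threshold   : catchment size d0 in minutes
    """
    within = (travel_time <= threshold).astype(float)

    # Step 1: supply-to-demand ratio R_j of each clinic over its catchment (demand in thousands)
    demand_in_catchment = within.T @ (demand / 1000.0)
    ratio = np.divide(supply, demand_in_catchment,
                      out=np.zeros(len(supply)), where=demand_in_catchment > 0)

    # Step 2: A_i^F sums the ratios R_j of all clinics reachable from municipality i
    return within @ ratio
```

Calling `two_step_fca` on the regional origin-destination matrix, clinic volumes and municipal populations would reproduce the index mapped in Fig. 3.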
Reorganization simulation
The potential spatial accessibility index detects the dynamics of the spatial access to the outpatient clinics, while the reorganization simulation is useful to assess the effects of the possible strategies for restructuring the regional outpatient service network. The simulation focuses on the municipalities with a high potential accessibility index, equal to or over .65. These represent the municipalities in which residents have access to a minimum of .65 specialized visits within 15 min [12]. Each municipality was analyzed in depth, creating specific scenarios to study the supply, demand, and the travel time distances. The clinics from municipalities characterized by high accessibility were subdivided into quintiles of yearly volume of outpatient visits. The units that were part of the first quintile class (low-volume clinics) were considered in the closure simulation. This threshold has been fixed because, to our knowledge, there are no specific standards that identify any minimum threshold for the number of outpatient visits that should be provided in a year.
For each scenario, a strategic solution was proposed that guarantees accessibility to the clinics within the minimum possible time, thus maintaining an efficient service provision. All the simulated managerial solutions focus on reorganizing the distribution of the clinics in municipalities where an oversupply was identified.
The next step involved computing a new potential spatial accessibility index based on the updated number of clinics resulting from the closure simulation. The findings, and the differences between the initial and second indexes, highlight the repercussions stemming from the potential closure of clinics and verify whether equitable accessibility has been guaranteed to all patients.
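Continuing the sketch above, the selection of closure candidates and the before/after comparison could look as follows; the .65 cut-off and the first-quintile volume rule follow the text, while the study's final decisions also weigh the 20- and 25-min catchments scenario by scenario.

```python
import numpy as np

baseline = two_step_fca(travel_time, supply, demand, threshold=15.0)

# Clinics reachable within 15 min from a high-accessibility municipality (index >= .65)
# whose yearly volume falls in the first quintile are candidates for closure.
high_access = baseline >= 0.65
reachable_from_high = (travel_time <= 15.0)[high_access].any(axis=0)
low_volume = supply <= np.quantile(supply, 0.20)
candidates = reachable_from_high & low_volume

# Recompute the index without the candidate clinics and compare.
after = two_step_fca(travel_time[:, ~candidates], supply[~candidates], demand, threshold=15.0)
change = after - baseline
```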
To better understand the spatial distribution of the outpatients' clinics, various spatial statistical analyses were run as exploratory tests. The nearest neighbor index was run to measure the degree of spatial dispersion in the distribution based on the minimum inter-feature distances, and the weighted standard deviational ellipse was then computed to check the distribution of these features [12].
The average nearest neighbor analysis returned the ratio between the observed average distance and the expected average distance based on a hypothetical random distribution of the same sample features over the same area. A ratio below 1 indicates that the features are clustered, while a ratio above 1 indicates that they are dispersed over the geographical area. Our pattern value was 0.99, and thus the pattern exhibits clustering and a concentration of clinics in this area. The average distance between two outpatient clinics, given by the expected mean distance, was 6 km [30]. The directional distribution returned the standard deviational ellipse (SDE) (Fig. 2) based on the locations of the clinics and the total number of outpatient visits (ESRI report) as the associated attribute value. The directional trend confirmed the cluster area identified using the point pattern statistics. These techniques enabled us to better understand the regional distribution of the outpatient clinics.
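For reference, the average nearest neighbor ratio can be computed directly from projected clinic coordinates; the sketch below uses SciPy and a toy random pattern standing in for the 246 clinic locations.

```python
import numpy as np
from scipy.spatial import cKDTree

def average_nearest_neighbor_ratio(points, study_area):
    """points: (n, 2) projected coordinates; study_area: area of the region in the same squared units."""
    tree = cKDTree(points)
    dist, _ = tree.query(points, k=2)          # k=2: the nearest neighbour of a point is itself at distance 0
    observed_mean = dist[:, 1].mean()
    expected_mean = 0.5 / np.sqrt(len(points) / study_area)   # expected mean distance under spatial randomness
    return observed_mean / expected_mean        # < 1 clustered, ~1 random, > 1 dispersed

rng = np.random.default_rng(3)
clinics_xy = rng.uniform(0, 150_000, size=(246, 2))                  # toy coordinates in metres
print(average_nearest_neighbor_ratio(clinics_xy, 150_000.0 ** 2))    # ~1 for a random pattern
```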
Population distribution by municipalities, Clinics distribution by No. of Visits, SDE [The figure is created by the authors and is not taken from other sources]
Figure 2 shows the distribution of the regional population by municipalities, the distribution of the clinics by number of visits provided, and the SDE. The map suggests a correlation between the two distributions. As Smith et al. [31] highlighted, understanding the spatial distribution of a population is important to ensure adequate spatial accessibility to healthcare services [31].
Figure 3 shows the potential spatial accessibility index, obtained by running the 2SFCA method. The index ranges from .0001 to 1.408, with an average of .415 specialized visits for each citizen within a 15-min drive [12] and a standard deviation of .325.
Potential spatial accessibility index within 15 min [The figure is created by the authors and is not taken from other sources]
Each value of the index refers to the number of specialized visits that each person has access to within 15 min from the municipality he or she lives in. Those who live in the municipality with the highest value have access to 1.408 specialized visits within a 15-min drive [12]. In Fig. 3 the shades of green correspond to the range of the index. The light colors refer to low accessibility values and the dark colors to high values.
Figure 4 shows the distribution of the outpatient clinics by number of visits provided, and the municipalities with a spatial accessibility index higher than .65. Out of the total, 17 municipalities are characterized by a high value index. The scenario simulation focuses on three interesting cases, highlighted with red circles, which can provide evidence to decision makers when evaluating the outpatient delivery system. The cases present solutions in which the outpatient clinics could be for example merged or closed, thus increasing efficiency, delivering more visits, and at the same time maintaining an equitable accessibility level.
Municipalities characterized by a potential spatial accessibility index > = .65 [The figure is created by the authors and is not taken from other sources]
Figure 5 illustrates the first scenario. Two municipalities and three outpatient clinics (represented in red and sized by the number of visits provided) that belong to the first quintile class are involved in the potential closure simulation.
First scenario [The figure is created by the authors and is not taken from other sources]
If these three clinics were shut down, the residents of the two municipalities highlighted in Fig. 5 by the striped cover and named A and B, next to the two with the high accessibility values (highlighted by the purple border as Scenario 1), would no longer have access to any clinics within 15 min.
The study does not consider emergency services but focuses only on planned specialized patient visits. As urgent access to clinics is not considered, 5 and 10 min have been added to the catchment area simulation. Catchment areas of 20 (colored in orchid purple) and 25 (colored in dark cabernet purple) driving minutes from both the A and B centroid municipalities were run, to assess how accessibility to the clinics changes if the driving area is expanded. Within 20 min, residents of municipality A have access to 4 clinics, while residents of municipality B have access to 1 clinic. Within 25 min, residents of A have access to 6 clinics and residents of B have access to 4. As residents of A and B must drive a maximum of 10 min more to have access to specialized visits, we hypothesize that the 3 clinics could be shut down.
The second scenario is presented in Fig. 6. One municipality is characterized by high accessibility value, labeled as Scenario 2. Six clinics (five represented in red, one in light blue and sized by the number of visits provided) that belong to the first quintile class and have delivered 1133 specialized visits in 2015 are involved in the potential closure simulation.
Second scenario [The figure is created by the authors and is not taken from other sources]
If the six clinics were shut down, the residents of municipality C (represented in Fig. 6 by the stripes) would no longer have access to any clinic within 15 min. Additionally, 20 (colored orchid purple) and 25 min (colored dark cabernet purple) driving catchment area analyses were run from the centroid of municipality C to establish how many clinics the residents have access to. They still have no access to any clinics within 20 min, while if they drove for 25 min they could access 5 outpatient clinics.
If one of the six clinics (colored light blue in Fig. 6) involved in the simulation remains open, residents of municipality C have access to at least one specialized clinic within 20 min. This solution guarantees more equitable access to the specialized health services of the area.
Figure 7 shows Scenario 3. Eight municipalities and six outpatient clinics (1403 specialized visits provided in 2015) are involved in the simulation. The clinics are highlighted in Fig. 7 in red and blue. If all the 6 clinics were shut down, the residents of municipalities D, E, F, and G, highlighted in Fig. 7 by the stripes and the specific labels, would no longer have access to any clinics within 15 min. The 20 (colored orchid purple) and 25 min (colored dark cabernet purple) catchment areas have been computed. Within 20 min' drive, residents of municipality D have access to 2 clinics, those in municipality E still have no access, those in municipality F have access to 1 clinic, and those in municipality G do not have access to any clinics. Within 25 min, residents of municipality D have access to 3 outpatient clinics, those in municipality E still have no access, those in municipality F have access to 1 clinic, and those in municipality G have access to 1 clinic.
Third scenario [The figure is created by the authors and is not taken from other sources]
Given these preliminary results, a different managerial strategy is required in this area. The clinics colored blue in Fig. 7 have been merged into 1, allowing all the 4 municipalities D, E, F, and G to have access to at least 1 clinic within 25 min' drive.
The merging strategy fits well with the aim of reducing inefficiencies and over supply services, while maintaining equitable access to specialized clinics.
The closure simulation revealed that 39 clinics can be shut down, thus reducing the total from 246 to 207. The new spatial accessibility index was run based on this new number of clinics, and the results are presented in Fig. 8.
New potential spatial accessibility index within 15 min [The figure is created by the authors and is not taken from other sources]
Comparing Fig. 3 with Fig. 8 allows us to easily identify the differences between the original accessibility index (Fig. 3) and the new index (Fig. 8). The closure simulation revealed that most of the municipalities are covered within 15 min, and only in a few cases must residents travel for a maximum of 25 min to access at least 1 of the specialized outpatient clinics.
The aim of this study is to increase awareness of the importance of implementing a comprehensive geographical health information infrastructure, to support policy makers and healthcare managers with evidence-based solutions for the provision of health care services related to the needs. The study provides possible solutions to assist the planning and restructuring of regional health care services through an analysis of accessibility to outpatient clinics. Ensuring that resources are used effectively and efficiently when complying with regional and national regulations is essential, while also taking into account the population's needs (demand for services) [32].
Using an integrated framework of geographical and administrative healthcare data enabled the identification of the municipalities that need to reorganize their delivery of outpatient services more efficiently by rethinking new clinic allocation, shutting down, or merging clinics. In particular, through the analyzed scenarios, the study identifies where delivery oversupply occurs in the current clinic distribution, which can be interpreted as an overestimation of the real needs of the population.
Seventeen areas with a high potential spatial accessibility index were identified. The analyses revealed that in 7 of these, residents could not access any clinics within 15 min' drive time, which was set as the minimum travel time distance.
In these seven areas, few clinics belong to the first quintile class of volumes (those with low volumes of consultations per year), and thus could be closed. Most of these clinics need to stay open to guarantee equitable access. In the remaining 10 municipalities, all the clinics that belong to the first quintile can be reorganized, through a strategic shut down or merging solution. Out of the original 246 outpatient clinics, 38 could be directly shut down and 2 clinics could be merged. The total number of clinics that the simulation allows to close is 39, guaranteeing the population accessibility to outpatient services in a maximum of 15 to 25 min.
The methodological framework presented can be a valuable instrument for healthcare planners, as it can enable them to understand how outpatient services are organized and distributed. The study suggests ways to improve the efficiency of the delivery network while preserving equitable access.
The research has some limitations. The study had to deal with the problem of population data aggregation. To better shape the spatial distribution of the population, a dasymetric model has been applied to establish the centroids of each municipality, aggregating the data of the population. The dasymetric model helped to reduce the distance errors that can originate from the use of aggregated data. As in all studies that use aggregated data, there may be a statistical bias as a result of the modifiable areal unit problem (MAUP). To minimize the effects of MAUP, the smallest administrative units possible were used for the analyses [35, 36].
In addition, the study did not take into account flows of patients that immigrate into Tuscany. This phenomenon can seriously affect the volumes of the clinics at regional borders. However, not considering external residents' flows does not affect the spatial accessibility in terms of equitable access to the outpatient clinics among Tuscan residents.
The measurement of patients' specialized health care service needs is an important aspect of this study. The number of visits was used as a proxy for the level of need, although it is acknowledged that there is no direct correspondence between need and use [33, 34]. Scholars have proposed healthcare need indices [35], but the lack of an approved health service utilization rate has been identified as a research gap, highlighting the importance of this analysis [33].
Despite these limitations, the study underlines the importance of the integration of geographical components into healthcare administrative data to determine the provision and location of health care services in relation to needs. Indeed, the simulation analysis enables the identification of potential improvements in health care service delivery efficiency and value in health care, through reducing unnecessary cost and waste while maintaining or improving quality and access.
Additionally, this is the first study to our knowledge that focuses on the development of a potential spatial accessibility index in the Italian context, and that provides scenarios for describing and understanding the spatial organization of health care.
A series of health care reforms based on strategic resource allocation have been implemented in Italy to develop a more efficient delivery system. This system should also reflect the real needs of the population and guarantee equal access to quality care for all. The study originates from the view that, to accomplish these tasks, any reform must be grounded on solid evidence-based assessments. In addition, the assumption that geographical analyses can be useful in simulating and visualizing the effects of policies, and thus in legitimizing the introduction of new managerial strategies, informed the research question "How can a GIS support health care management in the reorganization, redesign and planning of health care services?"
The research has demonstrated the importance of geographical instruments as convenient and easily readable tools in supporting the health care sector. The developed framework addresses the research question, demonstrating the benefits of GIS approaches within decision-making processes, and thus can help health care planners assess the effects of health policy from a geographical perspective [24, 30].
Based on the results, the authors recommend the use of geographical analyses in tackling the challenges of health reform, and to support decision-making processes. In 2015, Neutens highlighted how GIS was becoming increasingly recognized as an important and valuable instrument in mapping the spatial distribution of health care needs. Neutens also showed how GIS tools can be used in monitoring and evaluating the socio-spatial impacts of health policies, which can redress the balance of inequitable access and the disparities of health outcomes [21, 36], and thus gives greater theoretical foundation to this research.
http://www.garanteprivacy.it/web/guest/home/docweb/-/docweb-display/export/1311248
GIS: Geographical Information System
2SFCA: Two-step Floating Catchment Area
MAUP: Modifiable Areal Unit Problem
Melorose J, Perroy R, Careas S. World population prospects. United Nations. 2015;1.
Mladovsky P. et al. Health policy responses to the financial crisis in Europe. Policy Summ. 2012:1–38. https://www.lenus.ie/hse/handle/10147/267834.
Clemens T, et al. European hospital reforms in times of crisis: aligning cost containment needs with plans for structural redesign? Health Policy. 2014;117:6–14.
Beveridge W. Social insurance and allied services. HMSO London (1942). doi:Cmd 6404.
Musgrove P. Health insurance: the influence of the beveridge report. Bull World Health Organ. 2000;78:845–6.
Nastasi G, Palmisano G. The impact of the crisis on fundamental rights across member States of the EU - Country Report on Italy, 2015, Policy Department C: Citizens' Rights and Constitutional Affairs, Civil Liberties, Justice And Home Affairs, European Union, PE 510.018
Măzăreanu VP. Using geographical information systems as an information visualization tool. A case study. Ann Alexandru Ioan Cuza Univ Econ. 2013;60:139–46.
Clarke KC, McLafferty SL, Tempalski BJ. On epidemiology and geographic information systems: a review and discussion of future directions. Emerg Infect Dis. 1996;2:85–92.
Brian Hilton, Thomas A. Horan B. T. in Geographic Information Systems in Business (ed. Pick J. B.) 212–235 (2005).
Anthony RN. Planning and control systems: a framework for analysis. Boston: Div. Res. Grad. Sch. Bus. Adm. Harvard Univ; 1965.
Higgs G. A literature review of the use of GIS-based measures of access to health care services. Health Serv Outcomes Res Methodol. 2004;5:119–39.
Ngui AN, Apparicio P. Optimizing the two-step floating catchment area method for measuring spatial accessibility to medical clinics in Montreal. BMC Health Serv Res. 2011;11(166).
Aday LA, Andersen RA. Framework for the study of access to medical care. Health Serv Res. 1974;9:208–20.
Penchansky R, Thomas JW. The concept of access: definition and relationship to consumer satisfaction. Med Care. 1981;19:127–40.
Ricketts TC, Goldsmith LJ. Access in health services research: the battle of the frameworks. Nurs Outlook. 2005;53:274–80.
Levesque J-F, Harris MF, Russell G. Patient-centred access to health care: conceptualising access at the interface of health systems and populations. Int J Equity Health. 2013;12:18.
Khan AA. An integrated approach to measuring potential spatial access to health care services. Socio Econ Plan Sci. 1992;26:275–87.
Luo W, Whippo T. Variable catchment sizes for the two-step floating catchment area (2SFCA) method. Heal. Place. 2012;18:789–95.
Gulliford M, et al. What does 'access to health care' mean? J Health Serv Res Policy. 2002;7:186–8.
Guagliardo MF. Spatial accessibility of primary care: concepts, methods and challenges. Int J Health Geogr. 2004;3:3.
Gautam S, Li Y, Johnson TG. Do alternative spatial healthcare access measures tell the same story? GeoJournal. 2014;79:223–35.
Luo W, Wang F. Measures of spatial accessibility to health care in a GIS environment: synthesis and a case study in the Chicago region. Environ Plan B Plan Des. 2003;30:865–84.
Wang F. Quantitative methods and socio-economic applications in GIS; 2010.
Cromley EJ, Mc Lafferty SL. GIS and Public Health; 2012.
Yang DH, Goerge R, Mullner R. Comparing GIS-based methods of measuring spatial accessibility to health services. J Med Syst. 2006;30:23–32.
Briggs DJ, Gulliver J, Fecht D, Vienneau DM. Dasymetric modelling of small-area population distribution using land cover and light emissions data. Remote Sens Environ. 2007;108:451–66.
Boo G, Fabrikant SI, Leyk S. A novel approach to veterinary spatial epidemiology: dasymetric refinement of the Swiss Dog Tumor Registry Data. ISPRS Ann Photogramm Remote Sens Spat Inf Sci. 2015;II-3/W5(SEPTEMBER):263–9.
Radke J, Mu L. Spatial decomposition, modelling and mapping service regions to predict access to social programs. Geogr Inform Sci. 2000;6:105–12.
Fransen K, Neutens T, De Maeyer P, Deruyter G. A commuter-based two-step floating catchment area method for measuring spatial accessibility of daycare centers. Heal Place. 2015;32:65–73.
Wise S, Craglia M (Ed.). GIS and Evidence-Based Policy Making. CRC Press - Taylor & Francis Group; 2008.
Smith CM, Fry H, Anderson C, Maguire H, Hayward AC. Optimising spatial accessibility to inform rationalisation of specialist health services. Int J Health Geogr. 2017;16(15).
Nicholls S. Measuring the accessibility and e quity of public parks: a case study using GIS. Manag Leis. 2001;6:201–19.
Mokgalaka H. Measuring access to primary health care: use of a GIS-Based Accessibility Analysis. In Planning Africa 2014 Conference. 2014. http://researchspace.csir.co.za/dspace/bitstream/10204/7913/1/Mokgalaka_2014.pdf.
McLafferty SL. GIS and health care. Annu Rev Public Health. 2003;24:25–42.
Ursulica TE. The relationship between health care needs and accessibility to health Care Services in Botosani County- Romania. Procedia Environ Sci. 2016;32:300–10.
Neutens T. Accessibility, equity and health care: review and research directions for transport geographers. J Transp Geogr. 2015;43:14–27.
We would like to thank Doctor Viviana Cossi of the Regional Department of Information Systems and services, for her help in geocoding the addresses. We thank all researchers of the Management and Health Laboratory and Professor Sabina Nuti for their help, and Lauren Scott Griffin at Esri for comments and insights that greatly improved the manuscript.
This research was primarily conducted at Management and Healthcare Lab, Sant'Anna School of Advanced Studies, Piazza Martiri della Libertà, 24; 56127 Pisa, Italy.
The authors received no specific funding for this work.
The data used in the study are from administrative health records of the Tuscany Region and are not publicly available. Access to data by the Scuola Superiore Sant'Anna, Pisa (Italy) was allowed within the Decree n°544 of 2010 of the Tuscany Region, and Decree n°157 of 2010 of the Scuola Superiore Sant'Anna, Pisa (Italy).
Geoinformatics and Earth Observation Laboratory, Department of Geography and Institute for CyberScience, The Pennsylvania State University, University Park, PA, USA
Martina Calovi
Management and Healthcare Lab, Institute of Management, Sant'Anna School of Advanced Studies, Piazza Martiri della Libertà, 24, 56127, Pisa, Italy
Chiara Seghieri
MC contributed to all aspects of the work: manuscript writing, background review, data development and analyses, maps development. CS contributed to manuscript writing, modeling and background review. Both the authors have read and approved the final version of the manuscript.
Correspondence to Martina Calovi.
The administrative data used in the study were anonymized at individual level and the patient's identity and other sensitive data were not disclosed. The study was performed in full compliance with Italian laws on privacy, Decreto Legislativo 30 Giugno 2003, n. 196, and approval by an Ethics Committee was unnecessary.
Spatial accessibility
Outpatient; closure simulation
Galois Field/Examples/Order 4
Example of Galois Field
The algebraic structure $\struct {\GF, +, \times}$ defined by the following Cayley tables is a Galois field:
$\begin{array} {c|cccc} + & 0 & 1 & a & b \\ \hline 0 & 0 & 1 & a & b \\ 1 & 1 & 0 & b & a \\ a & a & b & 0 & 1 \\ b & b & a & 1 & 0 \\ \end{array} \qquad \begin{array} {c|cccc} \times & 0 & 1 & a & b \\ \hline 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 1 & a & b \\ a & 0 & a & b & 1 \\ b & 0 & b & 1 & a \\ \end{array}$
From Field with 4 Elements has only Order 2 Elements we have that a Galois field of order $4$, if it exists, must have this structure:
$\struct {\GF, +}$ is the Klein $4$-group
$\struct {\GF^*, \times}$ is the cyclic group of order $3$.
We have that $4 = 2^2$, and $2$ is prime.
From Galois Field of Order q Exists iff q is Prime Power, there exists at least one Galois field of order $4$.
As $\struct {\GF, +, \times}$ is the only such algebraic structure that can be a Galois field, it follows that $\struct {\GF, +, \times}$ must be a Galois field.
$\blacksquare$
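As a quick computational cross-check of the tables above (a minimal sketch, not part of the cited sources; the element labels $0, 1, a, b$ and the choice of reduction polynomial $x^2 + x + 1$ over GF(2) are assumptions made for illustration):

```python
# Minimal sketch: reproduce the GF(4) Cayley tables by modelling each element
# as a pair (c1, c0) meaning c1*x + c0 over GF(2), reduced modulo x^2 + x + 1.
elements = {'0': (0, 0), '1': (0, 1), 'a': (1, 0), 'b': (1, 1)}
names = {v: k for k, v in elements.items()}

def add(p, q):
    # Addition is coefficient-wise XOR (characteristic 2).
    return (p[0] ^ q[0], p[1] ^ q[1])

def mul(p, q):
    # Multiply (p1*x + p0)(q1*x + q0), then reduce using x^2 = x + 1.
    c2 = p[0] & q[0]
    c1 = (p[0] & q[1]) ^ (p[1] & q[0])
    c0 = p[1] & q[1]
    return (c1 ^ c2, c0 ^ c2)  # the x^2 term folds into both x and 1

for symbol, op in (('+', add), ('x', mul)):
    print(symbol, *elements)
    for p in elements.values():
        print(names[p], *(names[op(p, q)] for q in elements.values()))
```

Running the sketch prints both tables and matches the Cayley tables given above, including $a \times a = b$ and $b \times b = a$.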
1969: C.R.J. Clapham: Introduction to Abstract Algebra ... (previous) ... (next): Chapter $4$: Fields: $\S 17$. The Characteristic of a Field: Example $25$
1971: Allan Clark: Elements of Abstract Algebra ... (previous) ... (next): Chapter $3$: Field Theory: Definition and Examples of Field Structure: $\S 87 \varepsilon$
Identification of effective spreaders in contact networks using dynamical influence
Ruaridh A. Clark ORCID: orcid.org/0000-0003-4601-20851 &
Malcolm Macdonald1
Applied Network Science volume 6, Article number: 5 (2021) Cite this article
Contact networks provide insights on disease spread due to the duration of close proximity interactions. For systems governed by consensus dynamics, network structure is key to optimising the spread of information. For disease spread over contact networks, the structure would be expected to be similarly influential. However, metrics that are essentially agnostic to the network's structure, such as weighted degree (strength) centrality and its variants, perform near-optimally in selecting effective spreaders. These degree-based metrics outperform eigenvector centrality, despite disease spread over a network being a random walk process. This paper improves eigenvector-based spreader selection by introducing the non-linear relationship between contact time and the probability of disease transmission into the assessment of network dynamics. This approximation of disease spread dynamics is achieved by altering the Laplacian matrix, which in turn highlights why nodes with a high degree are such influential disease spreaders. From this approach, a trichotomy emerges on the definition of an effective spreader where, for susceptible-infected simulations, eigenvector-based selections can either optimise the initial rate of infection, the average rate of infection, or produce the fastest time to full infection of the network. Simulated and real-world human contact networks are examined, with insights also drawn on the effective adaptation of ant colony contact networks to reduce pathogen spread and protect the queen ant.
Despite the commonality of spreading processes—such as consensus, disease spread, and rumour propagation—De Arruda et al. (2014) notes that the efficacy of centrality measures, in identifying effective spreaders, differs depending on the system dynamics. Given the clear relationship between random walk and spreading processes it is notable that the efficacy of eigenvector-based spreader selection also varies with the system dynamics; Clark et al. (2019) details the efficacy of eigenvector assessment for consensus dynamics, while De Arruda et al. (2014) notes the inferiority of eigenvector centrality for determining the influence of disease spreaders. In this work we introduce a new network representation of disease spread dynamics, to demonstrate that eigenvector-based assessments can be effective across differing spreading processes as long as the network accurately captures the system dynamics.
For disease spread dynamics, degree-based metrics (which include k-shell/k-core strategies) have been repeatedly found to identify a system's effective spreaders, as in De Arruda et al. (2014), Kitsak et al. (2010), Da Silva et al. (2012), Zeng and Zhang (2013), Wang et al. (2016), Liu et al. (2016), Salamanos et al. (2017) and Jiang et al. (2019). While k-shell and s-core [see Eidsaa and Almaas (2013)] strategies make some acknowledgment of the network structure, these methods are largely agnostic to the communities that may exist, even when these are highly separated from each other. In consensus dynamics, Clark et al. (2019) note that the most effective strategies for spreading information often rely on distributing spreaders across multiple communities rather than locating them centrally in the network. Liu and Hu (2005) and Stegehuis et al. (2016) found that community structure does influence the spread of disease on networks. Therefore, we aim to highlight that, as with consensus dynamics, a system's eigenvectors capture the interplay of dynamics and network structure that is fundamental to determining the effectiveness of disease spreaders.
Network structure, without considering the system dynamics, has been incorporated into the detection of effective spreaders, with Ghalmane et al. (2019) expanding on the concept of modular centrality to improve the performance of common centrality measures. Degree-based metrics have also been developed that only decentralise the spreader location rather than explicitly acknowledge the network's structure, as in Kitsak et al. (2010), Zeng and Zhang (2013), Wang et al. (2016), Jiang et al. (2019) and Yang et al. (2019). These include selecting spreaders from less prominent hubs [see Jiang et al. (2019)], preventing neighbouring spreaders from being selected [see Kitsak et al. (2010) and Wang et al. (2016)], altering the selected spreader's degree to be negative to avoid selecting nodes with overlapping spheres of influence [see Yang et al. (2019)], and acknowledging that selecting a spreader should diminish the importance of the links to an already infected node [see Zeng and Zhang (2013)]. These methods are agnostic to the interplay of topology and network dynamics, which can lead to inaccuracies for certain topologies, as acknowledged by Namtirtha et al. (2020) where a tunable optimisation is developed. In contrast we ensure that every connection informs the spreader selection with the system's eigenvectors providing a holistic assessment of the network, as demonstrated in Clark et al. (2019) and Punzo et al. (2016) for networks with linear consensus dynamics.
The networks considered herein are contact networks, constructed from close proximity contact durations and frequently used to analyse how disease can spread through a group of individuals as in Vanhems et al. (2013), Salathé et al. (2010), Stehlé et al. (2011), Génois et al. (2015) and Génois and Barrat (2018). These networks are representative of human interactions where human-to-human disease spread can occur. Information on the order of interactions is lost by treating the system as a static network, but these networks are still useful for understanding likely pathways for disease progression. However, as discussed, a new approach for representing these contact networks is introduced in this work to capture the non-linear relationship between contact duration and the probability of disease spread. This relationship is often described with an exponential function, as in Kiss et al. (2017) where it is used to simulate disease spread on networks. In this paper, we shall explore how to account for this exponential relationship when using eigenvectors to detect effective spreaders, and as a consequence highlight the important role contact network structure can play in disease spread.
Network definition
A graph is defined as \(G=(V,E)\), where \(V\) is a set of vertices and \(E\) a set of edges, which are unordered pairs of elements of \(V\) for an undirected graph and ordered pairs for a directed graph. The adjacency matrix, A, is a square \(\text {N} \times \text {N}\) matrix when representing a graph of \(\text {N}\) vertices. This matrix captures the network's connections, where \((A)_{ij}=(A)_{ji}>0\) if there exists an edge connecting vertices i and j, and \((A)_{ij}=0\) otherwise. This paper only concerns the detection of effective spreaders; therefore, all contact networks are scaled to lie within the same range, \(0\le (A)_{ij}\le 1 ~\forall \, i,j \in V\). Variable edge weights contain information on the relative strength of interactions, which for contact networks is defined by contact duration. The adjacency matrices throughout this paper are symmetric, but it will be important to note that the nonzero row entries of A indicate outgoing links from which a node can contract disease, whereas the nonzero column entries are incoming links that a node can use to spread disease to its contacts.
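As a minimal sketch of this construction (the contact-record format, function name and variable names are assumptions, not taken from the paper), a symmetric adjacency matrix scaled into \([0, 1]\) can be assembled from pairwise contact durations as follows:

```python
import numpy as np

def contact_adjacency(contacts, n):
    """Build a symmetric adjacency matrix from (i, j, duration) records and
    scale it so that 0 <= A_ij <= 1, as assumed throughout the paper."""
    A = np.zeros((n, n))
    for i, j, duration in contacts:
        A[i, j] += duration  # accumulate repeated contacts between i and j
        A[j, i] += duration
    if A.max() > 0:
        A /= A.max()         # scale all edge weights into [0, 1]
    return A

# Toy example with four nodes and three contact records (illustrative only).
A = contact_adjacency([(0, 1, 120.0), (1, 2, 30.0), (2, 3, 60.0)], n=4)
```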
Communities of dynamical influence
The communities of dynamical influence (CDI), introduced by Clark et al. (2019), are used to provide evidence of the prominent role network structure can play in determining a node's influence as a spreader of disease. This community detection attempts to capture the dynamical influence of nodes, where Klemm et al. (2012) defines dynamical influence as the influence a node has over the dynamic state of other nodes in the network. CDI are defined by global and local dynamical influence, with these communities identified by embedding the network in a Euclidean space defined by the real part of the system's three most dominant eigenvectors. Community leaders are identified as nodes that are further from the origin of this coordinate frame than their neighbours. The nodes belonging to each leader's community must have a directed path connecting them to the leader. If multiple leaders are viable then assignment is based on alignment to a leader, assessed using the scalar projection of each node's position vector in Euclidean space with respect to the viable leaders. In terms of dynamical influence, with respect to the whole network (i.e. global), this approach produces the same results as eigenvector centrality. However, Clark et al. (2019) show that more localised influence can also be identified by incorporating the 2nd and 3rd dominant eigenvectors, with community leaders not necessarily prominent according to eigenvector centrality.
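The full CDI procedure is defined in Clark et al. (2019) and is not reproduced here; the fragment below is only a simplified sketch of the leader-identification step (embedding nodes with the real parts of the three most dominant eigenvectors and flagging nodes that sit further from the origin than all of their neighbours). It uses the adjacency matrix, as in the ant colony analysis later in the paper, and omits the assignment of the remaining nodes to leaders by directed paths and scalar projection.

```python
import numpy as np

def cdi_leaders(A):
    """Sketch of the CDI leader-identification step: embed nodes using the
    real parts of the three most dominant eigenvectors of A and flag nodes
    that are further from the origin than every one of their neighbours."""
    vals, vecs = np.linalg.eig(A)
    order = np.argsort(-vals.real)       # most dominant eigenvalues first
    coords = vecs[:, order[:3]].real     # N x 3 Euclidean embedding
    dist = np.linalg.norm(coords, axis=1)
    leaders = []
    for i in range(A.shape[0]):
        neighbours = np.flatnonzero(A[i] > 0)
        if neighbours.size and np.all(dist[i] > dist[neighbours]):
            leaders.append(i)
    return leaders
```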
Susceptible-infected (SI) simulations
The main focus of this paper is the effectiveness of disease spreaders on contact networks. The performance of spreaders is analysed using susceptible-infected (SI) simulations, see Miller and Ting (2020), where edge weight affects the probability of transmission, with the time to infection exponentially distributed as described by Kiss et al. (2017). This paper is primarily concerned with the influence of a node on the spread of disease, therefore susceptible-infected-susceptible (SIS) and susceptible-infected-recovered (SIR) models are not considered to reduce the effect of stochastic resusceptibility and recovery events. SI simulations guarantee that all nodes with a directed path to an infected node will eventually become infected, therefore the effectiveness of disease spreaders are monitored by tracking the time taken for 25%, 50%, 75%, and 100% of the network to become infected.
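The SI simulations in the paper are run with the EoN package of Miller and Ting (2020); the fragment below is instead a self-contained Gillespie-style sketch written from the description above (function and variable names are assumptions). Each susceptible node is infected at a rate proportional to the summed contact weight to its infected neighbours, so times to infection are exponentially distributed, and the times at which 25%, 50%, 75% and 100% of the network is infected are recorded.

```python
import numpy as np

def si_milestones(A, seeds, tau=1.0, rng=None):
    """Gillespie-style SI simulation on weighted adjacency A. Returns the
    times at which 25/50/75/100% of the nodes are infected, starting from
    the initially infected 'seeds'."""
    rng = np.random.default_rng() if rng is None else rng
    n = A.shape[0]
    infected = np.zeros(n, dtype=bool)
    infected[list(seeds)] = True
    t, milestones, targets = 0.0, {}, (0.25, 0.5, 0.75, 1.0)
    while not infected.all():
        # Per-node infection rate: tau times total contact weight to infected nodes.
        rates = tau * A[:, infected].sum(axis=1)
        rates[infected] = 0.0
        total = rates.sum()
        if total == 0:       # remaining susceptible nodes are unreachable
            break
        t += rng.exponential(1.0 / total)
        new = rng.choice(n, p=rates / total)
        infected[new] = True
        frac = infected.sum() / n
        for m in targets:
            if frac >= m and m not in milestones:
                milestones[m] = t
    return milestones
```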
Exponentially distributed contact networks
For simulating the spread of disease through a network, exponentially distributed times to infection [as in Miller and Ting (2020)] are commonly assumed but contact network analysis frequently uses contact times to weight the adjacency matrix, such as in Salathé et al. (2010). In this work, the adjacency matrix is altered to reflect the exponential relationship between contact time and risk of disease spread, with the aim of improving the identification of effective spreaders of disease. The first step is to apply a commonly used exponential function for converting contact time into a weight that more accurately represents the increased risk of transmission due to the passage of time,
$$\begin{aligned} P = 1-exp(-\tau t) \end{aligned}$$
where t is a contact time between two nodes in the network and \(\tau\) is the transmission rate, see Kiss et al. (2017). By scaling each contact time according to Eq. 1, an exponentially adjusted adjacency matrix, \(A_e\) is produced where
$$\begin{aligned} (A_e)_{ij} := {\left\{ \begin{array}{ll} 1-exp(-\tau (A)_{ij}), &{} \hbox {if}\ (A)_{ij}>0.\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
Therefore, \(A_e\) is more representative of the probability that an infection will travel over an edge than A.
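A direct transcription of Eqs. 1 and 2 (a minimal numpy sketch; the function name is an assumption):

```python
import numpy as np

def exp_adjacency(A, tau=1.0):
    """Eq. 2: convert contact durations into approximate transmission
    probabilities, leaving absent edges at zero."""
    return np.where(A > 0, 1.0 - np.exp(-tau * A), 0.0)
```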
When multiple infected nodes are connected to a susceptible node, the system dynamics are no longer accurately captured by the adjusted adjacency. For example, when two infected nodes are in contact with the same susceptible node, the risk of transmission is less than the sum of the two edge weights (even when using \(A_e\)). Instead, the risk of transmission is more accurately captured by applying Eq. 1 when t is equal to the sum of the contact times with both of the infected nodes. Therefore, an exponentially adjusted Laplacian matrix, \(L_e\), is also proposed where the diagonal elements are equal to P in Eq. 1 when \(t=\sum _j (A)_{ij}\), i.e. the sum of all contact times for a given node. The off-diagonal elements are composed of the negated adjusted adjacency matrix values, i.e.
$$\begin{aligned} L_e = D_{e_{\Sigma t}} - A_e \end{aligned}$$
where \((D_{e_{\Sigma t}})_{ii}=1-exp(-\tau \sum _m (A)_{im})\). Hence,
$$\begin{aligned} (L_e)_{ij} := {\left\{ \begin{array}{ll} -(1-exp(-\tau (A)_{ij})), &{} \hbox {if}\ (A)_{ij}>0.\\ 1-exp(-\tau \sum _m (A)_{im}), &{} \text {if i = j} \\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
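The corresponding construction of the exponentially adjusted Laplacian of Eqs. 3 and 4 could then look like this (again a sketch, with assumed names):

```python
import numpy as np

def exp_laplacian(A, tau=1.0):
    """Eqs. 3-4: off-diagonals are the negated exponentially adjusted
    adjacency; each diagonal entry applies Eq. 1 to the node's total
    contact time."""
    A_e = np.where(A > 0, 1.0 - np.exp(-tau * A), 0.0)
    D_e = np.diag(1.0 - np.exp(-tau * A.sum(axis=1)))
    return D_e - A_e
```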
Understanding influence in disease spread
The assessment of influence in directed networks, following linear consensus, can be captured by the first left eigenvector \(\text {v}_1\) as described by Clark et al. (2019). The ratio of indegree to outdegree affects \(\text {v}_1\), where nodes with a high indegree and low outdegree wield more influence since their state has a greater impact on their neighbours than their neighbours' states have on them, see Clark et al. (2019). This is not usually relevant for undirected networks but, as discussed, the exponentially adjusted Laplacian \(L_e\) creates an imbalance between a node's incoming connections (along which it can spread disease) and its diagonal entry, which would usually be the sum of its outgoing connection weights but now equals the exponentially adjusted sum of its contact times. The ratio between incoming edge weights and the magnitude of a node's diagonal entry is greatest for nodes with a high degree. This results in high degree nodes being rewarded with an amplification of their influence, as determined by \(\text {v}_1\), when compared with a linear system. Influence in this context means that a node is effective at spreading the disease to others, while not being easily infected itself by a more influential node. Both aspects, the ability to spread and the difficulty of being infected, will be important in the following sections, where we will describe how to optimise \(L_e\) to select effective spreaders.
Disease spreader selection
A challenge in this work is translating approaches from linear consensus to disease spread. Finite resources were allocated to nodes in Clark et al. (2019) to drive any network to rapid consensus. Resources were allocated with an optimisation based on CDI's community designations and the first left eigenvector (\(\text {v}_1\)). Such an approach needs to be adapted when considering disease spread; nodes have susceptible and infected states that are binary, unlike the variable resource allocations and smooth transitions seen in consensus models. The first left eigenvector \(\text {v}_1\) is still the basis of spreader selection presented herein, but to select any number of disease spreaders an iterative process of spreader selection and network alteration is introduced. For an undirected Laplacian matrix L (\(L=D-A\) where D is a diagonal matrix of degree), \(\text {v}_1\) (where \(\lambda _1=0\)) is a uniform vector. The exponentially adjusted Laplacian \(L_e\) is still undirected but the imbalance between incoming weight sum and the magnitude of the diagonal entry means that \(\text {v}_1\) of \(L_e\) is no longer uniform and can be used to select influential spreaders. Also \(\lambda _1 \ne 0\), but \(\lambda _1\) is still the smallest eigenvalue of \(L_e\).
Spreader selection is an iterative process, where the node associated with the largest entry of \(|v_1|\) is the first selected. If more spreaders are to be selected, then the i-th selected node \(\gamma [i]\) has its incoming and outgoing connections removed, i.e. \((L_e)_{\gamma [i]j}=(L_e)_{j\gamma [i]}=0\) \(\forall \,j \ne \gamma [i] \in {\mathcal {V}}\), and \((L_e)_{\gamma [i]\gamma [i]}=\sum _j (L_e)_{jj}\). A new \(\text {v}_1\) is calculated for this updated \(L_e\) matrix and the process repeats until all spreaders are selected. By setting \((L_e)_{\gamma [i]\gamma [i]}=\sum _j (L_e)_{jj}\), this ensures that the smallest eigenvalue of \(L_e\) is not associated with an already chosen spreader node but instead concerns the giant component of the graph. The spreader selection algorithm is detailed in pseudo-code in Algorithm 1.
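Algorithm 1 itself is not reproduced here, but a sketch of the procedure described above (take the largest entry of \(|v_1|\), disconnect the chosen node, set its diagonal entry to the sum of the diagonal entries, and repeat) could be written as follows; the function name and the explicit masking of already chosen nodes are assumptions.

```python
import numpy as np

def select_spreaders(L_e, k):
    """Iterative spreader selection sketch: v_1 is the eigenvector of the
    smallest eigenvalue of the symmetric matrix L_e; after each selection
    the chosen node is disconnected and its diagonal entry inflated."""
    L = L_e.copy()
    chosen = []
    for _ in range(k):
        vals, vecs = np.linalg.eigh(L)   # eigenvalues in ascending order
        v1 = np.abs(vecs[:, 0])          # eigenvector of the smallest eigenvalue
        v1[chosen] = -np.inf             # safety: never reselect a spreader
        node = int(np.argmax(v1))
        chosen.append(node)
        L[node, :] = 0.0                 # remove incoming and outgoing connections
        L[:, node] = 0.0
        L[node, node] = np.trace(L)      # push its eigenvalue away from the smallest
    return chosen
```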
Optimising spreader selection
The exponentially adjusted Laplacian \(L_e\) is only an approximation of disease spread dynamics. This section details how understanding the implications of that approximation enables different types of effective spreader to be identified. Spreaders can provide a fast initial rate of infection, a fast time to full infection of the network, or a compromise between those two that provides a more consistent rate of infection.
Spectral analysis, in the form of the first left eigenvector, uses \(L_e\) to compare a node's ability to spread infection, when all its neighbours are susceptible, with the ability of its neighbours, if they were all infected, to spread disease to it. This is obviously a comparison of two extremes, which as discussed previously provides greater reward to nodes with a high degree. The matrix \(L_e\) represents the maximum reward, in terms of influence, that could be expected from the non-linear relationship between contact duration and the probability of disease spread. By scaling the contact durations so that they are reduced in the adjacency matrix A, before creating \(L_e\), the ratio of incoming connection weights (ability to spread disease) to the magnitude of the diagonal entry of \(L_e\) (ability to catch disease) is reduced. Hence, the reward in terms of influence for high degree nodes can also be reduced through scaling the adjacency matrix.
Scaling the adjacency matrix is achieved by adjusting \(\text {s}\), where the exponentially adjusted adjacency becomes
$$\begin{aligned} (A_e)_{ij} := {\left\{ \begin{array}{ll} 1-exp(- {\tau }{s} (A)_{ij}), &{} \hbox {if}\ (A)_{ij}>0.\\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
The exponentially adjusted Laplacian becomes
$$\begin{aligned} (L_e)_{ij} := {\left\{ \begin{array}{ll} -(1-exp(-{\tau }{s}(A)_{ij})), &{} \hbox {if}\ (A)_{ij}>0.\\ 1-exp(-\sum _m {\tau }{s}(A)_{im}), &{} \text {if i = j} \\ 0, &{} \text {otherwise}. \end{array}\right. } \end{aligned}$$
It is anticipated that s values close to 1, which reward high degree nodes, will be most effective for detecting spreader selections that maximise the initial rate of spread. Conversely, low s values are expected to result in selections that identify nodes belonging to more isolated communities in the network and hence provide better results when looking to infect the whole network. However, the best value of s varies depending on the network, so an optimisation is introduced in which a set of discrete values between 0.001 and 1, \(S=\{0.001,0.1,0.2,0.3,0.4,0.5,0.6,0.7,0.8,0.9,1\}\), is tested, where \(s \in S\). For each value of s, a spreader selection is generated and all unique selections are tested with the SI simulations described in Miller and Ting (2020). While this set S provides effective selections for most of the topologies tested, the results will highlight certain artificial topologies for which a smaller range of values, such as \(S=\{0.001,0.01,0.02,0.03,0.04,0.05,0.06,0.07,0.08,0.09,0.1\}\), produces superior selections.
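A sketch of the candidate-generation loop over the scaling values in S is given below (it reuses the select_spreaders helper sketched earlier; the names and the return format are assumptions). The SI evaluation of each unique candidate is then performed as described in the following subsections.

```python
import numpy as np

def candidate_selections(A, k, tau=1.0,
                         S=(0.001, 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0)):
    """Generate one spreader selection per scaling value s (Eqs. 5-6) and
    keep only the unique selections for later SI evaluation."""
    unique = {}
    for s in S:
        A_e = np.where(A > 0, 1.0 - np.exp(-tau * s * A), 0.0)
        D_e = np.diag(1.0 - np.exp(-tau * s * A.sum(axis=1)))
        selection = tuple(sorted(select_spreaders(D_e - A_e, k)))  # helper from the earlier sketch
        unique.setdefault(selection, s)
    return unique  # maps each unique selection to the first s that produced it
```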
Initial—maximising initial spread
The Initial selection aims to maximise the initial rate of infection spread, by comparing the time to 25% infected for each of the candidate spreader selections. The chosen selection produces the minimum median time from 10 SI simulations.
Averaged—consistent performance
Times are recorded at 25%, 50%, 75%, and 100% intervals of network infection from 10 SI simulations. For a given candidate spreader selection, the Averaged selection records the median times produced at each percentage interval. The median times are then divided by the mean of these medians, producing a normalised median time at each percentage interval, with the sum of these normalised medians assessed. The selection that minimises the sum of normalised median times is selected.
End—fast time to full infection
The End selection minimises the time to 100% infected, by comparing each of the candidate spreader selections and selecting the option that produces the minimum mean time from 10 SI simulations.
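The three criteria can be expressed over the milestone times gathered from repeated SI runs. The sketch below assumes, for each candidate selection, a list of milestone dictionaries such as those returned by the si_milestones helper above, and it assumes that the Averaged normalisation is taken across candidate selections at each percentage interval, as in the figure captions; both the data format and the names are assumptions rather than the authors' code.

```python
import numpy as np

INTERVALS = (0.25, 0.5, 0.75, 1.0)

def score_candidates(times_by_candidate):
    """times_by_candidate maps a candidate selection (tuple of nodes) to a list
    of milestone dictionaries, one per SI run, e.g. {0.25: t25, ..., 1.0: t100}.
    Returns the Initial, Averaged and End selections described above."""
    cands = list(times_by_candidate)
    medians = {c: {m: np.median([run[m] for run in times_by_candidate[c]])
                   for m in INTERVALS} for c in cands}
    mean_end = {c: np.mean([run[1.0] for run in times_by_candidate[c]]) for c in cands}
    # Normalise each interval by the mean of that interval's medians over all candidates.
    interval_mean = {m: np.mean([medians[c][m] for c in cands]) for m in INTERVALS}

    initial = min(cands, key=lambda c: medians[c][0.25])          # fastest to 25% infected
    averaged = min(cands, key=lambda c: sum(medians[c][m] / interval_mean[m]
                                            for m in INTERVALS))  # most consistent
    end = min(cands, key=lambda c: mean_end[c])                   # fastest to 100% infected
    return initial, averaged, end
```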
Weighted degree (no neighbours)
The benchmark for performance presented in the paper is weighted degree (no neighbours), which is shown by Kitsak et al. (2010) to be highly effective at spreader selection. This method combines weighted degree-based selection with the restriction that neighbours of already selected spreaders are not viable for inclusion, referred to here as weighted degree (no neighbours). This approach is an acknowledgment that weighted degree centrality, and also the k-shell strategy [see Kitsak et al. (2010)], are largely ignorant of the network structure and susceptible to error when weakly connected communities are present. However, the effectiveness of the no neighbour restriction only confirms that a node becomes less influential, in terms of disease spread, when a neighbour becomes infected. This reduction in influence is to be expected, as a node has no influence over a neighbour that is already infected.
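A sketch of this benchmark (names are assumptions; the fallback to any remaining node, used when every unselected node already neighbours a spreader, is omitted):

```python
import numpy as np

def weighted_degree_no_neighbours(A, k):
    """Pick k spreaders in descending weighted-degree order, skipping any
    node that is connected to an already selected spreader."""
    order = np.argsort(-A.sum(axis=1))
    chosen = []
    for node in order:
        if len(chosen) == k:
            break
        if all(A[node, c] == 0 for c in chosen):
            chosen.append(int(node))
    return chosen
```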
Other benchmarks were used—including eigenvector centrality, s-core strategy (a weighted degree version of k-core/k-shell), and betweenness centrality—and the results from these approaches are included in Additional file 1: Fig. S1 and S2.
Visualising spreader selection
Algorithm 1 is shown in operation in Fig. 1 for \(s=1\) (in Fig. 1a, b, c) and \(s=0.1\) (in Fig. 1d, e, f), where it is applied to a hospital ward contact network from Vanhems et al. (2013) to identify three spreaders. Fig. 1 presents a Euclidean space defined by the first two left eigenvectors of the exponentially adjusted Laplacian matrix (\(\text {v}_1\) and \(\text {v}_2\)). In Fig. 1a the most prominent node according to \(\text {v}_{1}\) is the first chosen spreader from the network defined using \(s=1\). The communities of dynamical influence (CDI), described in the Communities of dynamical influence section, divides the network into communities with their influence dependent on the largest \(\text {v}_{1}\) entry. Fig. 1b details how removal of this chosen spreader, from the influential nurse-dominated community, results in other nodes in that community losing influence (i.e. a smaller \(\text {v}_1\) value) as the number of nodes and pathways for spreading disease in that community has decreased. When a second node from the same community is removed, Fig. 1c details further loss of influence for the nurse community with the chosen spreader now located in a doctor-dominated community.
Spreader selection for hospital ward contact network (\(\tau = 1\)). CDI determined communities, defined for a are highlighted in b–f to visualise the iterative process of spreader selection when \(s=1\) (a–c) and \(s=0.1\) (d–f). In a, d the original network is embedded in a Euclidean space defined by the dominant eigenvectors (\(\text {v}_{1}\) and \(\text {v}_{2}\)) of \(L_e\). The node with the largest \(\text {v}_{1}\) value is the chosen spreader. In b, e, \(L_e\) is updated by removing connections from the chosen spreader. In c, f \(L_e\) is updated again by removing connections from the chosen spreader in b, e respectively. Markers denote hospital role and dot colour denotes community where community influence is ranked in a according to largest \(\text {v}_{1}\) entry in each community. Node size is mostly proportional to weighted degree, with a minimum size limit used to aid visibility
When \(s=0.1\) the process of spreader selection follows the same pattern, but as can be seen already in Fig. 1d the least influential community from Fig. 1a is given more prominence. In fact, after removing two chosen spreaders, the most prominent node in Fig. 1f is a node from this least influential community (where communities are kept the same as defined for the network in Fig. 1a). Importantly, and unlike degree based metrics, after selecting a spreader the next spreader is chosen based on a holistic assessment of node influence, i.e. an updated \(\text {v}_1\) for the updated network topology after chosen spreader removal. In this way, every spreader selection in this eigenvector-based selection is dependent on and responsive to network structure.
When \(s=1\), two nurses are selected in Fig. 1a, b from the same community with the high weighted degree nodes ensuring rapid initial spread of disease. In Fig. 1c, a doctor is selected as the next most effective spreader as there is a clear separation between the doctor and nurse dominated communities. In contrast, when \(s=0.1\), for Fig. 1e–f the system dynamics give more prominence to isolated nodes and communities that are difficult to infect. In Fig. 1d the chosen spreader is the same prominent nurse as selected in Fig. 1a. However, in Fig. 1e a doctor from the less influential doctor community is chosen, rather than another nurse. Finally, in Fig. 1f an isolated member of admin staff is selected since they are hard to infect due to their limited contact with others in the network.
Community spread
The eigenvector-based selection, visualised in Fig. 1, selects spreaders from different communities. Fig. 2 reveals why this is an effective strategy, as the initial spread of disease from prominent community nodes is primarily within their own community. The choice of \(\tau = 1\) is not intended to be representative of a particular disease, therefore time is reported without units and used only for comparison throughout this section. In Fig. 2a, disease is spread from the most prominent nurse in the nurse-dominated community, highlighted in Fig. 1a. The smallest mean times to infection are within the nurse community with a couple of the most prominent doctors also receiving early infection. In Fig. 2b the infection is spread from the most prominent doctor, which was selected in Fig. 1b as the chosen spreader. Again the prominent nodes within this doctor's community are amongst the earliest infected, with a selection of the most prominent nurses also being consistently infected within this initial time period. Since the initial disease spread is primarily contained within communities, distributing spreaders across these eigenvector-based communities can be an effective tactic, when looking to rapidly infect a network, as will be demonstrated in the following sections.
Simulated initial spread of disease from key nodes in a hospital ward contact network. A black outline defines the initially infected node in a, b. The mean time to infection, from 100 SI simulations (\(\tau = 1\)), is differentiated by colour for values below 3. Connections between nodes are also displayed
Infection spread—real-world networks
The ability to identify the most effective spreaders of disease is of most relevance when applied to real-world contact networks, such as those generated from data gathered using proximity sensors. In this section, the performance of spreader selection is evaluated in Fig. 3 for seven real-world contact networks. These contact networks include a hospital ward (Fig. 3a; \(\text {N}=75\)), a primary school (b, c; \(\text {N}=235\) and \(\text {N}=238\)), a high school (d; \(\text {N}=788\)), a workplace (e, f; \(\text {N}=92\) and \(\text {N}=217\)), and a conference (g; \(\text {N}=403\)).
Results from SI simulations (\(\tau = 1\)) of real-world contact networks using four selected spreaders. The contact networks are, a a hospital ward (Vanhems et al. 2013), b, c, a primary school (Stehlé et al. 2011), d a high school (Salathé et al. 2010), e, f a workplace (Génois et al. 2015; Génois and Barrat 2018) and, g a conference (Génois and Barrat 2018). Times are recorded from 100 SI simulations when 25%, 50%, 75%, and 100% of network are infected and then normalised with the mean of all times recorded at each percentage. From left to right; times are detailed for the weighted degree (no neighbours) selection, then the eigenvector-based selections for initial rate (Initial), average performance (Averaged), and time to all infected (End). The y-axes of a, c, f, g are cropped to exclude extreme values
The results in Fig. 3 highlight how the eigenvector-based selections can achieve three different measures of effectiveness, by varying the s value when creating the exponentially adjusted Laplacian matrix that represents the system. Thereby achieving fast times to 25% infection with Initial, consistently good performance with Averaged, or fast times to 100% infected with End. The s values selected are detailed above each plot in Fig. 3 for these three selections where \({s=[Initial, Averaged, End]}\).
The performance of weighted degree (no neighbours) appears to be most similar to the Initial selection, where there is only a clear difference in the time to 25% for Fig. 3a, f with Initial producing a faster infection rate. The End selection almost always produces the fastest time to 100% infection, but this can be seen to come at the cost of performance for all prior percentage intervals. The End selection also demonstrates that isolated nodes and communities appear to be common in human contact networks, as there is a very significant benefit to using this selection for the 100% infected times in Fig. 3a, c, d, e, f, g. The Averaged selection consistently positions itself between the times recorded for the Initial and End selections. In Fig. 3a, e, g this ensures a good performance at 100%, but without the large sacrifice in performance at lower percentage intervals.
These trends are not specific to the numbers of spreaders in the system, where results are detailed in Additional file 1: Fig. S3 for various numbers of selected spreaders. These results also include comparisons with eigenvector centrality, betweenness centrality, and s-core (no neighbours). S-core (no neighbours) performs similarly to the weighted degree (no neighbours) metric, while eigenvector and betweenness centrality are frequently the least accurate metrics for selecting effective disease spreaders. Finally, it is worth noting that for these examples, and those that follow, varying \(\tau\) does not result in changes to the relative performance of these methods and is not a focus of this paper for that reason.
Infection spread—artificial networks
It is useful to investigate the consistency of spreader selection performance on networks with controllable topology. Proximity Graphs are used here to demonstrate that the performance of both the eigenvector-based and weighted degree (no neighbours) selections can vary with the topology. The definition of the proximity graph used here, referred to as proximity-nearest neighbour (P-NNR), is as follows: Distribute 100 points—representing network nodes—in a Euclidean plane with a uniform random distribution for the x and y coordinates between 0 and 1. Define a proximity threshold (d), and any two points separated by a Euclidean distance that is less than d are connected. The weights of all connections are then defined using a uniform random distribution between 0 and 1.
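A sketch of the P-NNR construction described above (names are assumptions):

```python
import numpy as np

def pnnr_graph(n=100, d=0.3, rng=None):
    """Proximity-nearest-neighbour (P-NNR) graph: n points uniform in the
    unit square, connect pairs closer than d, and draw uniform random
    edge weights between 0 and 1."""
    rng = np.random.default_rng() if rng is None else rng
    pts = rng.random((n, 2))
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(pts[i] - pts[j]) < d:
                A[i, j] = A[j, i] = rng.random()
    return A
```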
In Fig. 4, 100 P-NNR graphs are investigated, with four different d values defining the topology construction, where each graph undergoes 100 SI simulations. While the eigenvector-based selections appear to perform similarly to the examples in Fig. 3, the \(\text {d}=0.2\) and \(\text {d}=0.3\) cases (Fig. 4a, b) require an alternative selection of s values. These results are achieved by employing a smaller range for the set S of possible values, namely \(S=\{0.001,0.01,0.02,0.03,0.04,0.05,0.06,0.07,0.08,0.09,0.1\}\). The nominal range was defined between 0.001 and 1, but this produces a notably poorer performance in the \(\text {d}=0.2\) and 0.3 cases for the Initial and Averaged selections. P-NNR topologies can produce strongly connected hubs, and the smaller s values are likely necessary to reduce the influence of the most prominent community after the first chosen spreader's removal, thereby enabling the selection of spreaders from less influential communities.
Results from SI simulations (\(\tau = 1\)) of artificial contact networks using four selected spreaders. The median times from 100 artificially generated proximity networks are reported, where each median was assessed from 100 SI simulations. The proximity networks (P-NNR) are defined by the Euclidean distance threshold (d). In a d = 0.2, b d = 0.3, c d = 0.4, and d d = 0.5, where connection weights are given a value between 0 and 1 according to a uniform random distribution. Times are recorded when 25%, 50%, 75% and 100% of network are infected then normalised using division by the mean of all times recorded at each percentage. From left to right; times are detailed for the weighted degree (no neighbours) selection, then the optimised selections for initial rate (Initial), average performance (Averaged), and time to all infected (End). The y-axes of a, b are cropped to exclude extreme values
The weighted degree (no neighbours) selection also has a variable performance dependent on the d value. In particular, for \(\text {d}\ge 0.4\) the weighted degree selection no longer produces a fast initial spread. This appears to be due to the no neighbours constraint, with larger d values creating more neighbours for each node. Therefore, there are fewer nodes that are not connected to a chosen spreader and hence available for selection. In fact, for \(\text {d}=0.5\) the s-core and weighted degree selections are allowed to select any available nodes when there are no nodes remaining that are not connected to a chosen spreader.
Ant pathogen response
The previous results emphasise the importance of community structure to disease spread in contact networks. By understanding the role of communities in effectively spreading disease, it is also possible to highlight effective network structures for preventing spread. In this section, the adaptation of ant colony contact networks in the presence of pathogen spreaders shall be presented, using the network eigenvectors, in a similar manner to Figs. 1 and 2.
The experiment reported in Stroeymeyt et al. (2018) exposed a number of ants to a pathogen, monitored all ant contacts for a period of 9 days and recorded which ants died during that period. The separation of the queen ant from the pathogen carriers is noted by Stroeymeyt et al. (2018), and can also be seen clearly by looking at Fig. 5 where the network is embedded in a Euclidean space defined by the system eigenvectors with the communities of dynamical influence (CDI), described in the Communities of dynamical influence section, also detailed. Six of the eleven monitored colonies are presented in the figure. From these it can be seen that infected ants can still maintain relatively long durations of contact, as denoted by the size of each node's marker that is proportional to weighted degree. There are examples of infected ants with minimal contacts during the experiment, but a more consistent sign of colony adaptation to the presence of pathogen is seen in the network structure. The communities are defined by CDI using the first three eigenvectors of the adjacency matrix, with Fig. 5 presenting two of these three dimensions. In Fig. 5, the queen's community is either free from pathogen carriers or pathogen carriers are at the origin of the eigenvector defined Euclidean space, which indicates that they have very limited contact with other ants. In many of the examples, no ants in the queen's community die during this survival experiment, indicating that they are unlikely to have contracted significant quantities of the pathogen. Furthermore, the relative influence of communities are indicated in Fig. 5, based on the largest first eigenvector entry of the adjacency matrix (eigenvector centrality) from each community. Pathogen carrying ants are commonly located in the least influential communities, as seen for 8 out of the 11 colonies investigated, where they have less ability to infect the rest of the network. Whereas pathogen carriers were only present in the most influential community in 2 out of the 11 colonies. In both of these cases, the pathogen carriers were closer to the origin of the Euclidean frame than the majority of their community's members, indicating their lack of influence in this most influential community. All 11 ant colonies are presented in the Additional file 1, including both the \(\text {v}_{A1}\) and \(\text {v}_{A2}\) (Fig. S4) and \(\text {v}_{A2}\) and \(\text {v}_{A3}\) (Fig. S5) perspectives.
Community structure from ant colony contact network after pathogen introduction. CDI determined communities are detailed for contact networks of ant colonies infected with a pathogen during a survival experiment [see Stroeymeyt et al. 2018]. Node size is proportional to weighted degree, and colours denote community designation with each community's influence ranked according to the magnitude of the largest entry of the first eigenvector (\(\text {v}_{A1}\)). There are markers indicating the queen ant, ants carrying pathogen at the start of the experiment and ants that died during the experiment. \(\text {v}_{A1}\) and \(\text {v}_{A2}\) are the two dominant eigenvectors of the adjacency matrix composed of pairwise contact durations
For the survival experiments in Stroeymeyt et al. (2018), the pathogen was given exclusively to forager ants which are those most likely to pick it up when venturing outside of the nest. It is, therefore, interesting to note that this separation of forager communities from the queen is detected even in colonies that were monitored when none were infected and provides an example of effective network topology for mitigating epidemic spread. As forager isolation after infection, noted by Stroeymeyt et al. (2018), can be achieved without significant reorganisation of the colony's contact network.
Finally, Fig. 5 presents the eigenvectors of the adjacency composed of contact duration, rather than the exponentially adjusted Laplacian, as the probability of pathogen spread in ants differs from that of human disease spread. For ants carrying pathogens, Stroeymeyt et al. (2018) notes that the probability also depends on the quantity of pathogen spores an ant is carrying.
The manipulation of the Laplacian matrix to better represent the dynamics of disease spread highlights a potential pitfall for centrality measures where the best performing measure can be chosen without a clear justification for why it should be effective [see Brandes (2020) for further discussion on the appropriate use of centrality measures]. In this case, it is logical that weighted degree should perform well when identifying effective spreaders, but given the clear relationship between disease spread and random walk assessments it is less reasonable that it should significantly and consistently outperform an eigenvector-based assessment. This paper has demonstrated that there is an issue with analysing the contact network composed of contact durations, when what is of primary concern is the probability of disease transmission. We have attempted to provide a better representation of a disease transmission network, by introducing the exponentially adjusted Laplacian. In doing so, we have highlighted why weighted degree selections perform well in the specific application of disease spread; this is not a new observation as De Arruda et al. (2014) notes that selection metrics differ depending on the underlying dynamics. But here we go further by illuminating why these dynamics essentially alter the balance between the probabilities of spreading and receiving disease, which amplify the influence of high degree nodes.
The three spreader selections Initial, Averaged, and End take advantage of understanding the trade-off between the most influential nodes in terms of immediate neighbours (those with multiple, high weight, connections) and the nodes that are only locally influential but are part of isolated communities. An optimisation is required to make these selections, but the results show that reducing the magnitude of weights in the adjacency matrix, before creating the exponentially adjusted Laplacian, changes selections from Initial, to Averaged, and then finally produces an End selection. This occurs because the scaling affects the ratio between incoming connection weights (relative probabilities of spreading infection) and the magnitude of the diagonal matrix entry (the relative probability of receiving infection).
The Initial and Averaged selections are of obvious interest to disease spread, both in terms of targets for testing—where effective spreaders would be the most damaging to go unnoticed - and potentially as candidates for vaccination. However, testing and vaccination optimisation are notably different problems to the detection of effective spreaders, so for example there is no guarantee that the most effective spreaders are the most effective to vaccinate. Also the spread of disease on susceptible-infected-susceptible (SIS) simulations has been shown by Nadini et al. (2018) to differ from the susceptible-infected-recovered (SIR) models of infection spread, so these results may not translate exactly to real-world implementation.
End selections may be of limited interest in preventing disease spread in humans, where the focus usually centres on preventing rapid initial spread of infection. But instead there may be applications in pursuits such as the sterilisation of mosquitoes through the intentional spread of genes as described in Callaway (2015), where the goal is to spread engineered genes throughout the entire populace.
The link between eigenvector-based spreader selection—that incorporates connection removal—and the communities defined by CDI is explored in the hospital ward contact network (Fig. 1), where weighted degree and also eigenvector centrality would fail to recognise the importance of targeting the less influential community comprised of doctors. This raises the question of why community detection is not part of Algorithm 1 for identifying spreaders, but as can be seen in Fig. 1 and as demonstrated by Clark et al. (2016) the removal of node connections is a method for revealing network communities. In the context of disease spread, it also provides a more accurate representation of the system after node infection, given that infected nodes cannot be influenced (infected) by others.
Another observation from the hospital ward network (Fig. 1) is its division into communities dominated by influential nurses and doctors, with patients presenting as far less effective spreaders. Focused testing in health care facilities was a recommendation from the World Health Organisation for combating the COVID-19 epidemic [see World Health Organization and others (2020)]. Given the insights found on a hospital ward network of 75 people, a similar application could be envisioned for key hospital wards where proximity sensors are deployed and the insights of network analysis used to inform testing and interventions in response to an infection outbreak.
The majority of this paper concerns the detection of effective spreaders using the system's eigenvectors, when accounting for the exponential relationship between contact duration and risk of transmission. For the ant colony examples, the relationship between contact duration and risk of transmission depends on the quantity of pathogen spores, see Stroeymeyt et al. (2018). Modelling this ant pathogen transmission risk is beyond the scope of the paper, therefore the eigenvectors are assessed using an adjacency matrix of contact durations. This emphasises that contact duration analysis can still provide insights into the spreading dynamics of a system and, in this case, the effective adaptation of ant colony structure. However, as has been shown throughout this paper, the analysis of the ant colonies using a matrix representation that captures the risk of transmission, and not just contact duration, would be more accurate and possibly more insightful.
Network structure influences the location of the most effective spreaders of disease; evidenced by ant colony networks and the effectiveness of eigenvector-based identification of effective spreaders, on both simulated and real-world contact networks. The system eigenvectors capture the influence of disease spreaders, when the network dynamics are accurately represented. We show that disease spread dynamics can be approximated by adjusting the construction of the Laplacian matrix to capture the non-linear relationship between contact time and the probability of disease spread. By representing the dynamics in this way, the success of degree based metrics—in identifying effective spreaders of disease—is shown to be due, in part, to the non-linear dynamics rewarding high degree nodes with greater influence. The concept of an effective spreader is shown to be ill-defined, where—for the newly introduced exponentially adjusted Laplacian—altering the ratio of incoming connection weight versus the magnitude of the diagonal entry creates a trichotomy on the concept of effective spreading. A spreader selection can be identified that can outperform the benchmark metric of weighted degree (applying a no neighbour constraint) in terms of the initial rate of infection, the average rate of infection, or the time to full infection of the network.
The ant survival experiment datasets analysed during the current study are available in the Zenodo repository, https://doi.org/10.5281/zenodo.1322669. The contact network datasets analysed during the current study are available in the SocioPatterns repository, http://www.sociopatterns.org/datasets/.
CDI: Communities of dynamical influence
P-NNR: Proximity nearest neighbour
SI: Susceptible-infected
SIS: Susceptible-infected-susceptible
SIR: Susceptible-infected-recovered
Brandes U (2020) Central positions in social networks. In: International Computer Science Symposium in Russia, pp. 30–45. Springer
Callaway E (2015) Mosquitoes engineered to pass down genes that would wipe out their species. Nature News
Clark R, Punzo G, Macdonald M (2016) Consensus speed optimisation with finite leadership perturbation in k-nearest neighbour networks. In: 2016 IEEE 55th conference on decision and control (CDC). IEEE, pp 879–884
Clark R, Punzo G, Macdonald M (2019) Network communities of dynamical influence. Sci. Rep. 9(1):1–13
Da Silva RAP, Viana MP, da Fontoura Costa L (2012) Predicting epidemic outbreak from individual features of the spreaders. J Stat Mech Theory Exp 2012(07):07005
De Arruda GF, Barbieri AL, Rodríguez PM, Rodrigues FA, Moreno Y, da Fontoura Costa L (2014) Role of centrality for the identification of influential spreaders in complex networks. Phys Rev E 90(3):032812
Eidsaa M, Almaas E (2013) S-core network decomposition: a generalization of k-core analysis to weighted networks. Phys Rev E 88(6):062819
Génois M, Barrat A (2018) Can co-location be used as a proxy for face-to-face contacts? EPJ Data Sci 7(1):11. https://doi.org/10.1140/epjds/s13688-018-0140-1
Génois M, Vestergaard CL, Fournet J, Panisson A, Bonmarin I, Barrat A (2015) Data on face-to-face contacts in an office building suggest a low-cost vaccination strategy based on community linkers. Netw Sci 3:326–347. https://doi.org/10.1017/nws.2015.10
Ghalmane Z, Cherifi C, Cherifi H, El Hassouni M (2019) Centrality in complex networks with overlapping community structure. Sci Rep 9(1):1–29
Jiang L, Zhao X, Ge B, Xiao W, Ruan Y (2019) An efficient algorithm for mining a set of influential spreaders in complex networks. Physica A Stat Mech Appl 516:58–65
Kiss IZ, Miller JC, Simon PL et al (2017) Mathematics of epidemics on networks. Springer, Cham
Kitsak M, Gallos LK, Havlin S, Liljeros F, Muchnik L, Stanley HE, Makse HA (2010) Identification of influential spreaders in complex networks. Nat Phys 6(11):888–893
Klemm K, Serrano MÁ, Eguíluz VM, San Miguel M (2012) A measure of individual role in collective dynamics. Sci Rep 2:292
Liu Y, Wei B, Du Y, Xiao F, Deng Y (2016) Identifying influential spreaders by weight degree centrality in complex networks. Chaos Solitons Fractals 86:1–7
Liu Z, Hu B (2005) Epidemic spreading in community networks. Europhys Lett 72(2):315
Miller JC, Ting T (2020) Eon (epidemics on networks): a fast, flexible python package for simulation, analytic approximation, and analysis of epidemics on networks. arXiv preprint arXiv:2001.02436
Nadini M, Sun K, Ubaldi E, Starnini M, Rizzo A, Perra N (2018) Epidemic spreading in modular time-varying networks. Sci Rep 8(1):1–11
Namtirtha A, Dutta A, Dutta B (2020) Weighted kshell degree neighborhood: A new method for identifying the influential spreaders from a variety of complex network connectivity structures. Expert Syst Appl 139:112859
Punzo G, Young GF, Macdonald M, Leonard NE (2016) Using network dynamical influence to drive consensus. Sci Rep 6:26318
Salamanos N, Voudigari E, Yannakoudakis EJ (2017) A graph exploration method for identifying influential spreaders in complex networks. Appl Netw Sci 2(1):26
Salathé M, Kazandjieva M, Lee JW, Levis P, Feldman MW, Jones JH (2010) A high-resolution human contact network for infectious disease transmission. Proc Natl Acad Sci 107(51):22020–22025
Stegehuis C, Van Der Hofstad R, Van Leeuwaarden JS (2016) Epidemic spreading on complex networks with community structures. Sci Rep 6(1):1–7
Stehlé J, Voirin N, Barrat A, Cattuto C, Isella L, Pinton J-F, Quaggiotto M, Van den Broeck W, Régis C, Lina B et al (2011) High-resolution measurements of face-to-face contact patterns in a primary school. PLoS One 6(8):23176
Stroeymeyt N, Grasse AV, Crespi A, Mersch DP, Cremer S, Keller L (2018) Social network plasticity decreases disease transmission in a eusocial insect. Science 362(6417):941–945
Vanhems P, Barrat A, Cattuto C, Pinton J-F, Khanafer N, Régis C, Kim B-A, Comte B, Voirin N (2013) Estimating potential infection transmission routes in hospital wards using wearable proximity sensors. PLoS One 8(9):73970
Wang X, Su Y, Zhao C, Yi D (2016) Effective identification of multiple influential spreaders by degree punishment. Physica A Stat Mech Appl 461:238–247
World Health Organization and others (2020) Laboratory testing strategy recommendations for covid-19: interim guidance. 22 March 2020. Technical report. World Health Organization
Yang G, Benko TP, Cavaliere M, Huang J, Perc M (2019) Identification of influential invaders in evolutionary populations. Sci Rep 9(1):1–12
Zeng A, Zhang C-J (2013) Ranking spreaders by decomposing complex networks. Phys Lett A 377(14):1031–1035
The authors received no financial support for the research, authorship, and/or publication of this article.
Department of Electronic and Electrical Engineering, University of Strathclyde, George Street, Glasgow, UK
Ruaridh A. Clark & Malcolm Macdonald
Ruaridh A. Clark
Malcolm Macdonald
RC was involved in the conceptualisation, analysis, methodology, visualisation, and drafting of the manuscript. MM was involved in the conceptualisation, drafting and revision of the manuscript. All authors read and approved the final manuscript.
Correspondence to Ruaridh A. Clark.
Additional file 1. Provides additional results, including a performance comparison of other benchmark spreader selections (Fig. S1 and S2), the effect of varying the number of spreaders selected (Fig. S3), and results for all 11 ant colonies from the survival experiment (Fig. S4 and S5).
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Clark, R.A., Macdonald, M. Identification of effective spreaders in contact networks using dynamical influence. Appl Netw Sci 6, 5 (2021). https://doi.org/10.1007/s41109-021-00351-0
Accepted: 02 January 2021
Disease spread
Dynamical influence
Special issue on Epidemics Dynamics & Control on Networks
oxidation state of chlorine in hypochlorous acid
Hypochlorous acid (HOCl, also written HClO) is a weak acid that forms when chlorine dissolves in water; it partially dissociates into the hypochlorite ion, ClO-. Both HOCl and ClO- are strong oxidizers and are the primary disinfecting species in chlorine solutions. Hypochlorous acid is also produced by the immune system's neutrophils (via myeloperoxidase-mediated peroxidation of chloride ions) to destroy bacteria, and it is used commercially as a non-toxic, non-irritating disinfectant and skin-cleansing agent that is effective against a broad range of microorganisms.

To find the oxidation state of chlorine in hypochlorous acid, apply the usual rules: hydrogen is +1, oxygen is -2, and the oxidation states in a neutral molecule must sum to zero. Writing x for chlorine:

(+1) + x + (-2) = 0, so x = +1.

Chlorine is a Group VIIA halogen and usually takes an oxidation state of -1 (as it does in hydrochloric acid, HCl), so the +1 state in hypochlorous acid is easily reduced back toward 0 or -1; this is why HOCl is such an effective oxidizing and disinfecting agent. Among the common chlorine oxyacids, chlorine is +1 in hypochlorous acid (HOCl), +3 in chlorous acid (HClO2), +5 in chloric acid (HClO3), and +7 in perchloric acid (HClO4), so of the choices listed (hypochlorous, chlorous, chloric, perchloric), the acid in which chlorine has oxidation state +1 is hypochlorous acid.
RS2-107: Mass and Gravity
Submitted by Bruce Peret on Thu, 11/20/2014 - 07:53
PDF: RS2-107: Mass and Gravity
As discussed in "RS2-106: Dimensions and Displacements," Larson refers to units of motion that comprise the two aspects of a scalar dimension, speed (s/t, 0 → 1) and energy (t/s, 1 → ∞). Three dimensions with two aspects resulted in six units of motion, which he then splits in half to create the three speed ranges for the material and cosmic sectors, designated as 1-x (low speed), 2-x (intermediate speed) and 3-x (ultra-high speed). The range number defines the maximum unit of motion and the "-x" some fraction, thereof.
Speed ranges are discussed in more detail in The Universe of Motion, as an explanation of the inverse density gradient of white dwarfs (intermediate speeds) and the anti-gravity motion of quasars and pulsars (ultra-high speeds), with both motions taking place in equivalent space instead of the normal space of our reference system. (The reason being that only a single, scalar dimension can be completely expressed in the reference system with the other two dimensions modifying the expression of that coordinate information via equivalent space.)
This is at variance with the equivalent space concept used at the particle and chemical levels, discussed in Nothing But Motion, where equivalent space is treated as the spatial expression of temporal motion. Granted, this does work for the second unit of motion (energy), but does not work for the third (speed in a 2nd dimension) because the third unit of motion is already "space," and therefore cannot also be in "equivalent space" at the same time.
In Gustave LeBon's book, The Evolution of Forces (1908), he discusses the difference between mass and weight, as they were interpreted by the 19th century researchers. Conventional science treats mass as force divided by acceleration, typically the acceleration of gravity. The older approach is to treat mass as weight divided by velocity:
$$mass = \frac{force}{acceleration} ~ = ~ \frac{ \frac{t}{s^2} }{ \frac{s}{t^2}} = \frac{t^3}{s^3} $$
Equation 1: Modern Definition of Mass
$$mass = \frac{weight}{velocity} ~ = ~ \frac{ \frac{t^2}{s^2} }{ \frac{s}{t}} = \frac{t^3}{s^3} $$
Equation 2: Older Definition of Mass
The older definition is actually closer to the Reciprocal System atomic model because particles and atoms are defined by magnetic and electric rotations, an angular velocity. In Larson's A-B-C displacement notation, the A-B magnetic "double rotation" has the dimensions of t/s × t/s = t2/s2; the same units LeBon refers to as weight. The electric rotation is an inverse spatial angular velocity, s/t, matching the velocity component. The older definition of mass precisely matches the A-B-C displacement structure of particles and atoms used by Larson:
$$mass = \frac{magnetic~ rotation}{electric~ rotation} ~ = ~ \frac{AB}{C} ~ = ~ \frac{\left( \frac{t}{s} \right)^2}{ \frac{s}{t}} = \frac{t^3}{s^3} $$
Equation 3: RS Definition of Mass (from Atomic Structure)
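As a quick sanity check on the dimensional algebra (my own sketch, not part of the article), a few lines of Python with sympy confirm that all three definitions reduce to the same space-time dimensions:

```python
# Check that force/acceleration, weight/velocity and AB/C all reduce to t^3/s^3.
from sympy import symbols, simplify

s, t = symbols('s t', positive=True)

modern = (t / s**2) / (s / t**2)     # force / acceleration
older  = (t**2 / s**2) / (s / t)     # weight / velocity
rs     = (t / s)**2 / (s / t)        # magnetic rotation / electric rotation

print(simplify(modern), simplify(older), simplify(rs))   # all print t**3/s**3
```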
In the Reciprocal System, the concept of "mass" is mathematically determined by the net temporal displacement of the atom—its angular velocity in time. The magnetic rotation therefore accounts for the primary mass of any particle or atom, which is then slightly modified by the electric rotation in equivalent time—the "time equivalent of space"—being the reciprocal concept of equivalent space.
Gravity of the Situation
As we know, space and time are reciprocals of each other. In the Reciprocal System, everything has its reciprocal, which also includes direction, velocity and geometry. Inward and outward motion are reciprocals, as are linear and angular velocities, and points and volumes.1
So, we have mass defined as an outward, angular velocity in time, defining a volume. Let's take a complete reciprocal of mass and see what we have as a natural consequence:
The aspect of time becomes space.
Outward motion becomes inward motion.
Angular (circumferential) velocity becomes linear (radial) velocity.
Volume becomes a point location.
The reciprocal of mass is therefore an inward, linear velocity in space that can be expressed through a single point. That is the definition of gravity, where the "point" is the "center of gravity." Mass and gravity are the same thing, from inverse perspectives.
Massless Particles
All material motions have a rotation in time and therefore all material motions (particles and atoms) must have mass. The problem with "massless" particles lies in the way we indirectly measure mass through the measurable gravitational pull in space, not unmeasurable angular velocity in time. And that brings up another reciprocal relation, that of the inverse relationship between "step measure," how we measure things in a straight line, and "growth measure," how we measure angles.
Step measure is the conventional method of measuring finite quantities, just like pacing off steps to measure distance. This is associated with the first unit of motion, speed, with the range2 of 0→1. Coordinate time can also be "step measured," but unfortunately our mechanics and technology only allow us to measure space, not time, so temporal measurements must be made by their projection into equivalent space as an angular change, growth measure.
Growth measure is associated with the second unit of motion, energy, with the range of 1→∞. Since we cannot do a finite count to infinity, growth measure is done with the Calculus concept of infinitesimals, the integral. To transform this growth measure in equivalent space to a step measure in linear space, the natural logarithm must be used: Δs = ln(Δt).3 The consequence of this is that the magnitude of gravity appears as a logarithmic curve, whereas the magnitude of mass is linear. The Reciprocal System works with discrete units, quanta, so until the magnitude of a temporal rotation, mass, becomes high enough to produce a single unit of inward, spatial magnitude, gravity does not exist in space. And that occurs with a net temporal speed of 3 displacement units, since ln(3) ≈ 1.1.4
So any rotating system that has a net displacement of 0, 1 or 2 will have no net effect as gravity in space, giving them the appearance of being "massless." Specifically, the "massless" particles are photons, positrons, electrons and neutrinos. The proton is the first particle with mass, having a temporal displacement of 3 units (2 for the proton, plus the 1 in the rotational base omitted from the notation).
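Footnote 4 below states this rule as gravity = floor(ln(Δt)). A small table (my own sketch; Δt = 0 is treated separately because ln(0) is undefined) shows where the cutoff falls:

```python
from math import floor, log

for dt in range(6):
    gravity = 0 if dt == 0 else floor(log(dt))
    print(dt, gravity)    # displacements 0, 1, 2 give 0 ("massless"); 3, 4, 5 give 1
```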
The "electron volt" masses that are associated with these massless particles are an attempt to determine the actual, rotational speeds of the particles, rather than inferring it from their gravitational influence.
Particles Moving at the Speed of Light
Just because a particle is "massless" does not mean it is carried by the progression of the natural reference system at the speed of light (unit speed, in natural units), as photons are. In order for a particle to be carried, there needs to be a free dimension, a dimension at unit speed in one of the three scalar dimensions of motion for the progression to have effect.
Uncharged electrons and positrons only use a single scalar dimension, leaving two free to be carried by the progression. Photons, as a birotation, use two scalar dimensions (basically a positron+electron combination) with the third available to be carried by the progression. Uncharged electron neutrinos use a single magnetic and electric dimension, leaving one free to be carried. Muon neutrinos are a single, magnetic dimension (analogous to a magnetic monopole), leaving two free dimensions to be carried by the progression.
Charge, the vibration created by a photon captured in a rotation, occupies two dimensions. Any charged particle will use all three dimensions, so it cannot be carried at the speed of light and behaves more particle-like, such as the charged electrons of static electricity. Starting with the proton, all three dimensions are occupied so atoms are never carried by the progression.
Direction Reversals and The Rotational Base
Larson considered only linear velocity to be primary, because he was thinking in spatial terms where the concept of rotation required two dimensions. But consider the case of an astronaut with a baseball, out in the vacuum of space where no other forces are present. He can do two things with that baseball: throw it, where it will continue to move at a linear velocity forever in a straight line, or spin it and it will rotate (angular velocity) forever. In Eastern philosophy, linear motion is yang and angular motion is yin—"spin is yin." A primary, angular velocity is every bit as probable as a primary, linear velocity.
In order to get rotation with a "yang only" approach, Larson needed something to rotate, which gave rise to two devices: the direction reversal and the rotational base. The direction reversal is simply a diameter on which to create rotation as an angular velocity, resulting in the rotational base. This rotational base supplied the missing component on which to build atomic rotations that was not present in a purely linear system.
In RS2, the reevaluation of the Reciprocal System, we assume that the yin, angular velocity is a primary motion along with the yang, linear speed, completing the "tao of motion."
Taking this geometry into account, the progression of the natural reference system is still "outward at unit speed," but with one aspect being a linear, outward speed (a translation) and the other aspect being an angular, outward speed (a rotation). Therefore, every location is potentially a "rotational base" and the concept of a "direction reversal" is unnecessary, because rotation is primary and RS2 does not require "something to rotate."
This implies that the concept of vibration, which Larson associates with his direction reversal, is not a primary motion but only arises as shear strain from oppositely directed motions, such as the counter-rotations of a birotation as expressed in Euler's formula, $e^{ix} + e^{-ix} = 2\cos(x)$.
Rotational Dimensions
Our physical senses are designed to interpret the world around us in simple, 1-dimensional relationships, such as moving in a straight line (mph, kph), or spinning with a constant angular velocity (rpm).5 This creates a conceptual challenge with the Reciprocal System, because the RS is a 3 dimensional system that cannot be directly expressed in a single length or angular measurement. These visualizations can assist in understanding the concepts:
2-dimensional magnetic rotation: a cone with the wide end expanding across the surface of a sphere. This is known as a solid rotation that takes 720 degrees, or 4π radians to complete. In physics, this is measured as a particle with spin-½, because it appears to take two, 360-degree rotations to complete (they assume it is going at half speed).
1-dimensional electric rotation: a common, spinning disc that takes 360 degrees or 2π radians to complete. In physics, these are the "integer spin" particles, the spin-1 on which they base relative measurements.
1-dimensional vibration: two, opposing electric rotations. The second rotation "undoes" the first rotation, resulting in a cosine waveform. Take a rod with a flexible elbow. Rotate one end of the rod one way, then rotate the pivot of the rod in the opposite direction. The far end will trace a sine wave in one dimension.
Rotational vibration: combine a vibration with a rotation. In one dimension, you get the "washing machine agitator" motion where the rotational direction is constantly changing. In two dimensions, you get a similar effect, except the washing machine is flipping itself upside down and back at the same time.
1-dimensional rotational vibration is electric charge (electric field).
2-dimensional rotational vibration is magnetic charge (magnetism).
And that's all you need to construct a Universe of Motion.
1 Larson only considered the inverse relationship between space/time and inward/outward. Being unfamiliar with projective geometry, he never considered the linear/angular or point/volume inverses. These are a feature of RS2.
2 Since the datum of measurement in the RS is unity, the speed of light, speed is measured by a fractional amount.
3 Larson, Dewey B., Basic Properties of Matter, ISUS, Inc., Salt Lake City, UT, 1988, page 7 on "Solid Cohesion" and Equation 1-1.
4 If you are a computer/math person, gravity = floor(ln(Δt)). When Δt = 0, 1 or 2, gravity = 0 = massless.
5 "mph" = Miles per Hour; "kph" = Kilometers per Hour; "rpm" = Revolutions per Minute.
Factorial change-making
I think most people make change for amounts of money using a greedy algorithm: start with an empty pile of money, and repeatedly add the largest-valued coin or bill that keeps the value of the pile at or under the desired total, until the total is reached. This can use more coins than necessary, though, for exotic coinage systems, or even some less-exotic ones like US coins without the nickels.
The Moneylender and his Wife, Quentin Matsys, 1514
There's a simple test for whether the greedy algorithm is always optimal for a system of coins. It's called the one-point test, and was first published by Magazine, Nemhauser, and Trotter in 1975. It actually tests a stronger property, whether every prefix of the sequence is optimal for the greedy algorithm. To perform this test, look at each consecutive pair of coin values \(x\) and \(y\), with \(x\) the smaller of the two. Round \(y\) up to a multiple \(kx\) of \(x\). Then we could make change for \(kx\) non-greedily, using \(k\) copies of the \(x\) coin. Does the greedy algorithm do at least as well, using \(y\) and some smaller coins? If not, then we have found an example where greedy is non-optimal. But if a coin system passes this test for all of its coins, then the whole sequence and all of its prefixes are greedy-optimal.
This immediately shows that any sequence of coin values where each is an integer multiple of its predecessor is greedy-optimal, because the greedy algorithm will always represent each tested value \(kx\) with the single coin \(y\). And you can check that it's true for US money as well (with the nickel, and with or without the $2 bill). But the no-nickel sequence \(1,10,25\) already fails the test, because when \(x=10\) and \(y=25\), \(kx=30\) is not represented optimally by the greedy algorithm.
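Here is a minimal sketch of the greedy change-maker and the one-point test described above; the function names and the assumption of a 1-cent coin are mine, not part of the original argument.

```python
def greedy_count(coins, amount):
    """Number of coins the greedy algorithm uses; coins are listed in increasing order."""
    count = 0
    for c in reversed(coins):
        count += amount // c
        amount %= c
    assert amount == 0          # assumes a 1-cent coin, so change always completes
    return count

def one_point_test(coins):
    """True if greedy change for each y, rounded up to a multiple k*x of x, uses at most k coins."""
    for x, y in zip(coins, coins[1:]):
        k = -(-y // x)          # ceiling division: round y up to a multiple of x
        if greedy_count(coins, k * x) > k:
            return False
    return True

print(one_point_test([1, 5, 10, 25]))   # True: US coins are greedy-optimal
print(one_point_test([1, 10, 25]))      # False: greedy needs 25+1+1+1+1+1 for 30
```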
Han Dynasty spade coins were not very round
The one-point test also shows that some systems of money that do not involve round multiples are still greedy-optimal. For instance, in the Fibonacci Republic they use coins of \(1, 2, 3, 5, 8, 13, 21, 34,\) and \(55\) cents, a Fibonacci dollar coin of \(89\) cents, and bills for the larger denominations of \(144\) cents etc. Each of these numbers is at most double its predecessor, and we can represent each doubled Fibonacci number \(kx=2F_i\) in the one-point test greedily using the identity \(2F_i=F_{i+1}+F_{i-2}.\) The nearby country of Mersenneland instead uses the numbers \(M_i=2^i-1=1,3,7,15,31,\dots\) for their money. Each is more than double but at most triple its predecessor, and we can represent each number \(kx=3M_i\) in the one-point test greedily using the identity \(3M_i=M_{i+1}+2M_{i-1}\).
Regardless of whether a coinage system is greedy-optimal or not, one can use another simple algorithm to find the amounts of money that would cause the greedy algorithm the most trouble. Suppose that the sequence of coins is \(c_i\), and we want to calculate the smallest amount of money \(r_i\) that causes the greedy algorithm to use \(i\) coins. Then \(r_{i+1}=r_i+c_j,\) where \(c_j\) is the smallest coin value such that \(r_i+c_j\lt c_{j+1}.\) That is, we look for the first gap larger than \(r_i\) in the sequence of coin values, and we add \(r_i\) to the smaller of the two coins forming this gap.
In Factoria, the coin values are \(1,2,6,24,\) and \(120\) cents (the Factorial dollar), and they have bills for \(6, 42, 336\dots\) dollars. This is a greedy-optimal coin system, because each coin or bill is an integer multiple of its predecessor. Plugging in the factorial numbers to the formula above, the numbers that are hardest to express as sums of factorials (whether greedily or in any other way) are
\(1, 3, 5, 11, 17, 23, 47, 71, 95, 119, 239, 359, \dots\) (OEIS A200748).
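The recurrence for the hardest amounts can be sketched the same way (again my own code, using a long enough prefix of the factorial coins that the needed gap always exists):

```python
from math import factorial

def hardest_amounts(coins, terms):
    """Smallest amounts forcing greedy to use 1, 2, 3, ... coins; coins must be increasing."""
    amounts, r = [], 0
    for _ in range(terms):
        # smallest coin c_j whose following gap is big enough: r + c_j < c_{j+1}
        cj = next(c for c, nxt in zip(coins, coins[1:]) if r + c < nxt)
        r += cj
        amounts.append(r)
    return amounts

factorials = [factorial(n) for n in range(1, 12)]   # 1, 2, 6, 24, ..., 11!
print(hardest_amounts(factorials, 12))
# [1, 3, 5, 11, 17, 23, 47, 71, 95, 119, 239, 359]  -- matches A200748
```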
In the breakaway republic of South Factoria, they've long been familiar with how hard it is to make change using factorials. So, wanting a currency that makes what was once difficult more easy, they use the numbers in A200748 as the values of their own money. This choice also has the advantage that it's difficult to exchange South Factorial money for Factorial money. But is this a greedy-optimal coin system?
To see this, we need a clearer idea of how the numbers in A200748 are constructed. Each one is a factorial plus its predecessor. A straightforward induction using the formulas for \(c_i\) and \(r_i\) shows that, for each integer \(i\), there are exactly \(i\) differences equal to \(i!\), running from \(i!-1\) to \((i+1)!-1\). With this pattern in hand, we can show that sequence A200748 again passes the one-point test. When the sequence jumps from \(i!-1\) to \(2i!-1\), the one-point test runs the greedy algorithm on \(3(i!-1)=(2i!-1)+\bigl((i-1)(i-1)!-1\bigr)+\bigl((i-1)!-1\bigr)\), three coins of the sequence (for instance \(15=11+3+1\) when \(i=3\)), which it passes. And for any other jump in the sequence, from \(xi!-1\) to \((x+1)i!-1\) for some integer \(x\gt 1,\) the one-point test runs the greedy algorithm on \(2(xi!-1)=((x+1)i!-1)+((x-1)i!-1),\) again passing. So South Factorial money is indeed greedy-optimal.
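Reusing the two sketches above, a quick computational check agrees that these values make a greedy-optimal coin system:

```python
south_factoria = hardest_amounts(factorials, 12)   # 1, 3, 5, 11, ..., 359
print(one_point_test(south_factoria))              # True: greedy-optimal
```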
Which numbers are hard to change in South Factoria? As before we can construct the sequence recursively, by adding the starting points of big gaps to previous values in the sequence. This gives us the numbers
\(1, 2, 7, 30, 149, 868, 5907, 46226, 409105,\dots\) (OEIS A136574).
This construction also gives us a new insight into number sequence A136574, which was originally defined in a more complicated way as the row sums of a triangle of numbers involving factorials. Remember that each term in this sequence is the previous term plus the smallest coin that comes before a big gap. And in the hard-to-change-factorially numbers A200748, the smallest coin that comes before a big gap is always one less than a factorial. So this gives us the nice recurrence \(a(i)=a(i-1)+i!-1\) for the values of A136574.
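A few more lines (mine) confirm that the recurrence reproduces the published values of A136574:

```python
from math import factorial

a, values = 1, [1]
for i in range(2, 10):
    a += factorial(i) - 1       # a(i) = a(i-1) + i! - 1
    values.append(a)
print(values)   # [1, 2, 7, 30, 149, 868, 5907, 46226, 409105]
```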
So far, the systems of coins we've been considering are all pretty sparse. A dense system of coins (one with many coin values) will have a sparse system of hard-to-represent values, and vice versa. In the limit, for a system of coins with bounded gap size, the sequence of hard-to-represent values will be finite, and conversely for a system of finitely many coins the hard-to-represent values will be eventually periodic. So, to conclude with just a couple more examples, of greater density: if the coin values are all the square numbers (not greedy-optimal: \(12 = 3\times 4 = 9 + 1 + 1 + 1\) fails the one-point test) then the hard-to-represent-greedily values are
\(1, 2, 3, 7, 23, 167, 7223, 13053767,\dots\) (OEIS A006892).
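Reusing the earlier sketches, the square-coin system indeed fails the one-point test, and the same recurrence reproduces these values:

```python
squares = [n * n for n in range(1, 4000)]   # enough squares for the gaps needed below
print(one_point_test(squares))              # False: 12 = 3*4, but greedy gives 9+1+1+1
print(hardest_amounts(squares, 8))          # [1, 2, 3, 7, 23, 167, 7223, 13053767]
```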
For instance, even though every number can be represented as a sum of four squares, the greedy algorithm uses five for \(23 = 16 + 4 + 1 + 1 + 1\). And similarly, for prime-number coinage (plus a one-cent coin, again not greedy-optimal), the hard-to-represent values are the Pillai sequence
\(1, 4, 27, 1354, 401429925999155061,\dots\) (OEIS A066352).
To find the next term in the sequence, we need to find the first prime gap bigger than 401429925999155061. As the OEIS entry states, reaching a gap this big is likely to require hundreds of millions of digits. So in practice, all reasonable amounts of change require only four coins or bills in the prime number coinage system, even when we make change greedily. Goldbach's conjecture suggests, though, that a more clever change-making strategy would need only three coins for every value. So greed is not always good.
Christ Driving the Money Changers from the Temple, Rembrandt, 1626
Orbital Dynamics
A comprehensive JavaScript/Canvas orbital physics model
— Copyright © 2013, P. Lutus — Message Page —
The Model | Overview | Physics and Math
This application is an interactive simulation of a solar system with planets, comets and a dark energy model. If you want to see the simulation in 3D, get some anaglyphic glasses and enable "Anaglyphic" below.
The display window allows mouse movements to control rotation (drag the mouse cursor horizontally and vertically) and scale (spin the mouse wheel). This model is also mobile-aware, responding to touch where that is possible. One touch controls rotation much like a mouse cursor, two touches control scale.
A full exposition of the physics and math appears below the simulation.
I once wrote an article set describing the physics of Dark Energy, a cosmological mystery that will eventually cause the universe to fly apart. To show how dark energy works I created a three-dimensional solar system model including a dark energy term that causes the model to become unstable. The dark energy level required to tear the solar system apart is much higher than exists in nature, but I wanted an easily understandable model for tutorial purposes.
I wrote my original solar system model in Java, because Java can be embedded in a Web page and Java runs reasonably fast. Since then several things have happened to make Java a bad choice for an embedded Web page application. One, the Java browser plugin has begun to have some serious security issues (this problem doesn't affect desktop Java applications, which are still secure). Two, a new way to present graphic content and animations in a Web page, based on the HTML5 canvas tag, has greatly improved. Three, browser JavaScript engines have become much faster and more powerful, allowing real-time animation speeds. Finally, in its newer browser versions Microsoft has come around to supporting the canvas tag and some of the newer JavaScript features this kind of simulation requires.
Combining Colors
Computer graphic colors are represented by numbers. An integer meant to represent a color has three parts, for red, green and blue. Expressed in hexadecimal, a color number looks like this: ffffff. In this scheme, there are 256 shades (00 to ff in hex) of each of red, green and blue, for a total of 16,777,216 distinct colors.
With a black background, if I want to produce white where red and cyan are both present, I need only combine the colors using a logical OR operation (symbol |). Red's number in hexadecimal is ff0000, and cyan is 00ffff. Using computer logic, ff0000 | 00ffff = ffffff or white (to test this idea on a decimal calculator, add 990000 to 9999). This is what the canvas "lighter" global composition operator does — by combining red and cyan to produce white, the anaglyphic effect is preserved.
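The same trick in code form (a tiny sketch, not taken from the page):

```python
red, cyan = 0xFF0000, 0x00FFFF
print(hex(red | cyan))    # 0xffffff: red OR cyan gives white
```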
This project uses a JavaScript physics engine and a canvas-based display. For many years a browser-based JavaScript program couldn't run fast enough for something as demanding as a physics simulation, but in recent years browser designers have made great strides in speeding up their JavaScript engines, something I discovered by writing a computation-intensive Mandelbrot set generator that turns out to be remarkably fast on most platforms.
There are still some problems with the canvas-based display, most having to do with browser differences. I've always been a fan of anaglyphic 3D displays because the required equipment is so cheap and low-tech (a pair of red/cyan glasses is all you need). To create an anaglyphic image, you draw the image twice with a parallax angle separation between the renderings, an angle like a person has between his eyes. This produces two complete overlapped images, one red, one cyan. The 3D anaglyphic glasses separate the images, so the left eye gets the left image and the right eye gets the right image, and you experience a 3D effect.
Now for an important detail. If two areas of the anaglyphic image overlap, and both eyes should see brightness, the graphics rendering method needs to automatically produce an additive combination of red and cyan, i.e. white. This allows both the red and cyan anaglyphic lenses to accept that area as part of the image and support the effect. This result is achieved by choosing a canvas global composition method called "lighter", which means any new additions to the image are logically combined with what was there before, and combining red and cyan produces white (see "Combining Colors" on this page). All present browsers that support the canvas tag, also support the "lighter" composition operator.
But there's another anaglyphic display mode in which the background is white and the rendering is drawn in colors darker than the background. For this mode, when combining analgyphic colors, the opposite operation is required — an area having both red and cyan must become black. For that effect, we need a composition operator called (wait for it ...) "darker". As it turns out, some ill-informed people responsible for the HTML5 specification have decided to drop support for "darker". After all, we already have "lighter", and "darker" is like "lighter" but trivially different. Right?
Because "darker" was once part of the HTML5 canvas specification but is now being considered for elimination, the result is that different browser builders have begin moving in different directions, generally a bad thing. Google Chrome and Safari support "Darker", but Microsoft Internet Explorer and Firefox don't. The result is that the appearance of the inverted anaglyphic display above will depend on which browser you're using.
Physics and Math
The Simulator
A multi-body Newtonian gravitation simulator is relatively easy to write in two or three dimensions. For a system with more than two interacting bodies, and because of the three-body problem which prevents a closed-form solution, such a system must be modeled numerically. As it happens, all multi-body gravitational simulations are, and must be, performed numerically — from the simplest computer games to galaxy evolution simulations running on supercomputers.
The simulator on this page models the primary bodies in the solar system, including Pluto (even though Pluto is no longer regarded as a planet), and a set of comets for realism.
A gravitational simulation sets initial conditions of position and velocity for all the modeled bodies, then the simulation commences using increments of time. In such a simulation, each modeled body retains state vectors for position and velocity whose initial values are modified over time:
A radius between the gravitating bodies is computed: (1) $ r = \sqrt{x^2+y^2+z^2} $
A gravitational force scalar is computed: (2) $ f = - \frac{G m_1 m_2}{r^2} $
G = Universal gravitational constant
m1 = mass of body 1
m2 = mass of body 2
r = radius obtained from equation (1)
A normalized direction vector (or unit vector) is computed to provide a direction for the gravitational force scalar: (3) $ \hat{r}\{x,y,z\} = \frac{\{x,y,z\}}{\sqrt{x^2+y^2+z^2}}$
The velocity vector is updated by gravitational acceleration multiplied by the unit vector: (4) $ \vec{v}_{t+1} = \vec{v}_{t} + \hat{r} f \, \Delta t$
The position vector is updated by velocity: (5) $ \vec{p}_{t+1} = \vec{p}_{t} + \vec{v}_{t+1} \, \Delta t$
The updated position is plotted on an output device and the process is repeated.
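The recipe above can be written out compactly. The following is a minimal Python sketch of the same update rule for a single body orbiting the Sun; the page's actual model is JavaScript, and the constants, step size and the use of an acceleration scalar (the small body's own mass cancels) are only illustrative:

```python
G = 6.674e-11        # m^3 kg^-1 s^-2
M_SUN = 1.989e30     # kg

def step(pos, vel, dt):
    x, y, z = pos
    r = (x*x + y*y + z*z) ** 0.5                 # equation (1): radius
    a = -G * M_SUN / (r * r)                     # equation (2), as an acceleration
    unit = (x / r, y / r, z / r)                 # equation (3): unit vector
    vel = tuple(v + a * u * dt for v, u in zip(vel, unit))   # equation (4)
    pos = tuple(p + v * dt for p, v in zip(pos, vel))        # equation (5)
    return pos, vel

# Usage: an Earth-like orbit, 1 AU radius, ~29.8 km/s tangential speed.
pos, vel = (1.496e11, 0.0, 0.0), (0.0, 2.978e4, 0.0)
for _ in range(365 * 24):                        # one year in one-hour steps
    pos, vel = step(pos, vel, dt=3600.0)
print(pos)                                       # roughly back near the starting point
```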
When the above model is created in three dimensions as in the simulator on this page, there are some optimizations to avoid the use of inefficient trigonometric and other relatively slow functions that would reduce the frame rate. Here's a breakdown of one key optimization:
The usual approach to creating a gravitational acceleration vector is to multiply a force scalar by a unit vector as shown above. Because this is the most complex part of the mathematics, it should be examined more closely.
The force scalar looks like this (from equation 2 above): $f = - \frac{G m_1 m_2}{r^2}$
The unit vector looks like this (equation 3 above): $ \hat{r}\{x,y,z\} = \frac{\{x,y,z\}}{\sqrt{x^2+y^2+z^2}}$
In this expanded view of the unit vector, the three Cartesian components {x,y,z} are divided by a hypotenuse that represents their three-dimensional radial distance. The result is a vector in which each Cartesian component is a number 0 <= n <= 1 such that $ \sqrt{x^2+y^2+z^2} = 1$ (the normal meaning of "unit vector").
Because the unit vector is multiplied by the force scalar to produce an acceleration vector $\vec{a}$, some optimizations are possible:
Unit vector $ \hat{r} = \frac{\{x,y,z\}}{\sqrt{x^2+y^2+z^2}} = \{x,y,z\} \, (x^2+y^2+z^2)^{-1/2}$
Radius (equation 1) $ r = \sqrt{x^2+y^2+z^2} $
Force scalar $f = - \frac{G m_1 m_2}{r^2} = - G m_1 m_2 \, (x^2+y^2+z^2)^{-1}$
Acceleration vector (unit vector times force scalar):
$ \vec{a} = - G m_1 m_2 \, \{x,y,z\} \, (x^2+y^2+z^2)^{-1} \, (x^2+y^2+z^2)^{-1/2}$
Combining terms:
$ \vec{a} = - G m_1 m_2 \, \{x,y,z\} \, (x^2+y^2+z^2)^{-3/2}$
The result of this optimization is an acceleration vector that represents an absolute minimum of computation overhead and that accurately represents orbital gravitation in three dimensions.
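As a sketch of that combined form (mine, not the page's JavaScript), one exponentiation yields all three acceleration components, with no trigonometric calls:

```python
def accel(dx, dy, dz, gm):
    """Acceleration components toward the origin for separation (dx, dy, dz)."""
    inv_r3 = (dx * dx + dy * dy + dz * dz) ** -1.5
    return (-gm * dx * inv_r3, -gm * dy * inv_r3, -gm * dz * inv_r3)
```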
Some Orbital Physics
A cornerstone of modern physics is the idea of energy conservation. While watching the orbits in the simulator, one might wonder whether they model reality and conserve energy. After all, the elliptical comet orbits seem to be speeding up and slowing down over time, and higher speed represents higher energy. How do they conserve energy?
To answer, we need to examine two aspects of an orbiting body — its velocity, and its distance from the parent body. Here are the equations:
Kinetic energy: (7) $E_k = \frac{1}{2} m_1 v^2$
Gravitational potential energy: (8) $E_p = - \frac{G m_1 m_2}{r}$
It can be seen that kinetic energy increases proportional to the square of velocity, and the gravitational potential energy result becomes more negative as an orbiting mass approaches its parent body. As it turns out, in a frictionless orbit, these two forms of energy always sum to a constant — no energy is gained or lost:
Total of kinetic and potential energy: (9) $E_t = \frac{1}{2} m_1 v^2 - \frac{G m_1 m_2}{r} $
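Equation (9) is easy to monitor in the sketch above. Reusing G, M_SUN and step() from that snippet, the total stays very nearly constant around the orbit:

```python
def total_energy(pos, vel, m=5.972e24):
    """Kinetic plus gravitational potential energy, in joules (equation 9)."""
    r = sum(c * c for c in pos) ** 0.5
    v2 = sum(c * c for c in vel)
    return 0.5 * m * v2 - G * M_SUN * m / r

pos, vel = (1.496e11, 0.0, 0.0), (0.0, 2.978e4, 0.0)
e0 = total_energy(pos, vel)
for _ in range(365 * 24):
    pos, vel = step(pos, vel, dt=3600.0)
print(total_energy(pos, vel) / e0)   # very close to 1.0: energy is conserved
```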
Models of this energy relationship — like this one — show that these two properties of orbits are perfectly balanced. What's interesting about this part of physics is that there are a number of results from the past, like Kepler's laws of planetary motion, that describe some things that (at that time) couldn't be explained — for example, "A line joining a planet and the Sun sweeps out equal areas during equal intervals of time".
It turns out that Kepler's laws describe properties of orbits that would have to be true if orbits conserved energy, but they were formulated long before people began thinking about energy conservation as a physical principle.
The discovery of dark energy must rank as one of the biggest shocks to the world of physics in the last 100 years. According to some careful measurements followed by creative theorizing, dark energy is a weak repulsive energy field filling all of space. For masses in close proximity, it has negligible effects, but for masses more widely separated than galactic clusters, dark energy exerts a larger influence than gravity, gradually pushing masses father apart.
During the evolution of the universe from the Big Bang to the present, dark energy played no significant role until about 6 billion years after the Big Bang, or 7.7 billion years ago, at which point dark energy became a player in the universe's dynamics. In the long term, it is thought that dark energy will cause an exponential expansion of the universe's matter, which gives a final answer to the perennial cosmological question about the long-term fate of the universe (i.e. will the universe recollapse, gradually expand asymptotically, or expand without bound?).
The model built into this page can show my readers how dark energy would work if it were a great deal stronger than it is — it's just a demonstration of the effect, not its magnitude. Simply click the "Dark Energy" checkbox and see what happens to the orbiting masses. In reality, over a very long time, clusters of galaxies, not planets, would drift farther and farther apart, and the universe would gradually become a dark, nearly empty place.
The present mathematical model for dark energy is much like that suggested by Albert Einstein in 1917. Einstein had a purely theoretical reason for suggesting that dark energy might exist. Einstein was aware that the universe predicted by General Relativity was unstable — under the influence of gravity, the universe would either fly apart or fall together, but could not be static. Because there was no evidence for either of these outcomes in 1917, Einstein introduced a constant term he called the "cosmological constant", whose purpose was to resist the tendency of stationary masses to fall toward each other. Einstein believed his cosmological constant allowed a static universe to exist.
Within a few years, Edwin Hubble discovered the universe was expanding, which deprived the cosmological constant of a purpose. Also, as it turns out, the mechanism proposed by Einstein could not have balanced a static universe — it would have been like a pencil balanced on its tip. Einstein later called his cosmological constant "the biggest blunder in my career".
Interestingly, Einstein's original field equation included the cosmological constant this way:
(10) $ \displaystyle R_{\mu\nu} - \frac{1}{2}R \, g_{\mu\nu} + \Lambda \, g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}$
Where $\Lambda$ is the cosmological constant term, located on the left-hand side of the field equation, grouped with terms that define spacetime curvature and geometry. In the new formulation, meant to address the dark energy issue, the constant term has migrated to the right-hand side of the field equation, grouped with terms having to do with mass/energy, unfortunately in a way that's difficult to summarize concisely.
In the very simple numerical model on this page, a dark energy term optionally modifies the gravitational force¹ equation this way:
(11) $f = \Lambda - \frac{G m_1 m_2}{r^2}$
Where $\Lambda$ has a small positive value. In thinking about this equation, it can be seen that, because the dark energy term is a constant, and because the gravitational force¹ declines as the square of distance, the dark energy term has a much larger effect on widely separated masses than it does at close range.
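A minimal Python sketch of equation 11, using made-up values for $\Lambda$, the masses, and G, shows how the constant term takes over at large separations.

```python
def force(G, m1, m2, r, dark_energy=0.0):
    # equation 11: a constant repulsive term added to the attractive inverse-square term
    return dark_energy - G * m1 * m2 / r**2

G, m1, m2 = 1.0, 1.0, 1.0
LAM = 1e-4                          # exaggerated dark-energy constant, as in the demonstration
for r in (1.0, 10.0, 100.0, 1000.0):
    print(f"r = {r:7.1f}   gravity only = {force(G, m1, m2, r):+.2e}   "
          f"with dark energy = {force(G, m1, m2, r, LAM):+.2e}")
# At r = 1 the attraction dominates; by r = 1000 the constant term wins and the net force is repulsive.
```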
Again, in this model, I have chosen a very high default value for dark energy, much higher than exists in nature, just to be able to show a dynamic effect on a short time scale and on the scale of the solar system, and to show the different effect dark energy has at short and long distances. In this artificial demonstration, when the dark energy option is activated, all the planets inside the orbit of Mars remain in stable orbits, the orbit of Mars itself becomes somewhat unstable, and all masses farther from the sun quickly depart the solar system.
Again, in reality, because of its small theorized repulsive force, real dark energy has little or no effect at scales smaller than galactic clusters, and the evolution of the universe under the influence of dark energy will require many billions of years, so I ask that my readers remember that this demonstration is not meant to reflect reality.
1: In General Relativity, gravitation is not a force, but results from spacetime curvature. Gravity can be modeled as though it is a force, but one must remember this is a convenient fiction.
Intuition behind standard deviation
I'm trying to gain a better intuitive understanding of standard deviation.
From what I understand, it represents the average of the differences between the observations in a data set and the mean of that data set. However, it is NOT actually equal to the average of those differences, because it gives more weight to observations further from the mean.
Say I have the following population of values - $\{1, 3, 5, 7, 9\}$
The mean is $5$.
If I take a measure of spread based on absolute value I get
$$\frac{\sum_{i = 1}^5|x_i - \mu|}{5} = 2.4$$
If I take a measure of spread using the standard deviation I get
$$\sqrt{\frac{\sum_{i = 1}^5(x_i - \mu)^2}{5}} = 2.83$$
The result using standard deviation is larger, as expected, because of the extra weight it gives to values further from the mean.
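For reference, the two numbers are easy to reproduce with a quick Python sketch using the population formulas (dividing by $N$ rather than $N-1$):

```python
import math

data = [1, 3, 5, 7, 9]
mu = sum(data) / len(data)                                     # 5.0
mad = sum(abs(x - mu) for x in data) / len(data)               # 2.4
sd = math.sqrt(sum((x - mu) ** 2 for x in data) / len(data))   # 2.828..., rounded to 2.83
print(mu, mad, sd)
```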
But if I was just told that I was dealing with a population with a mean of $5$ and a standard deviation of $2.83$, how would I infer that the population comprised values something like $\{1, 3, 5, 7, 9\}$? The figure of $2.83$ just seems very arbitrary... I don't see how you are supposed to interpret it. Does $2.83$ mean the values are spread very wide, or are they all tightly clustered around the mean?
When you are presented with a statement that you are dealing with a population with a mean of $5$ and a standard deviation of $2.83$ what does that tell you about the population?
standard-deviation intuition
sonicboom
$\begingroup$ This question is related (though is not identical) to stats.stackexchange.com/q/81986/3277 and a further one linked to there. $\endgroup$ – ttnphns Feb 4 '14 at 12:11
$\begingroup$ It tells you a 'typical' distance from the mean (the RMS distance). What makes that 'large' or 'small' depends on your criteria. If you're trying to measure engineering tolerances it might be huge. In other contexts the same standard deviation may be regarded as quite small. $\endgroup$ – Glen_b -Reinstate Monica Feb 4 '14 at 16:53
My intuition is that the standard deviation is a measure of the spread of the data.
You have a good point that whether it is wide, or tight depends on what our underlying assumption is for the distribution of the data.
Caveat: A measure of spread is most helpful when the distribution of your data is symmetric around the mean and its tails are not much heavier than those of the Normal distribution. (This means that it is approximately Normal.)
In the case where data is approximately Normal, the standard deviation has a canonical interpretation:
Region: Sample mean +/- 1 standard deviation, contains roughly 68% of the data
(see first graphic in Wiki)
This means that if we know the population mean is 5 and the standard deviation is 2.83, and we assume the distribution is approximately Normal, I would tell you that I am reasonably certain that if we make (a great) many observations, only about 5% will be smaller than $-0.66 = 5 - 2 \times 2.83$ or bigger than $10.66 = 5 + 2 \times 2.83$.
Notice the impact of the standard deviation on this interval? (The more spread, the more uncertainty.)
Furthermore, in the general case where the data is not even approximately normal, but still symmetrical, you know that there exists some $\alpha$ for which:
Region: Sample mean +/- $\alpha$ standard deviation, contains roughly 95% of the data
You can either learn the $\alpha$ from a sub-sample, or assume $\alpha=2$; this often gives you a good rule of thumb for calculating in your head what future observations to expect, or which new observations can be considered outliers. (Keep the caveat in mind though!)
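As a quick empirical illustration of the rule of thumb, assuming a Normal population with mean 5 and standard deviation 2.83 (numpy is used for sampling; the sample size is an arbitrary choice):

```python
import numpy as np

rng = np.random.default_rng(0)
sample = rng.normal(loc=5, scale=2.83, size=1_000_000)
for k in (1, 2, 3):
    frac = np.mean(np.abs(sample - 5) <= k * 2.83)
    print(f"within {k} sd of the mean: {frac:.3f}")   # roughly 0.683, 0.954, 0.997
```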
I don't see how you are supposed to interpret it. Does 2.83 mean the values are spread very wide or are they all tightly clustered around the mean...
I guess every question asking "wide or tight" should also contain: "in relation to what?". One suggestion might be to use a well-known distribution as a reference. Depending on the context, it might be useful to think about: "Is it much wider, or tighter, than a Normal/Poisson?".
EDIT: Based on a useful hint in the comments, here is one more aspect of the standard deviation as a distance measure.
Yet another intuition of the usefulness of the standard deviation $s_N$ is that it is a distance measure between the sample data $x_1,… , x_N$ and its mean $\bar{x}$:
$s_N = \sqrt{\frac{1}{N} \sum_{i=1}^N (x_i - \overline{x})^2}$
As a comparison, the mean squared error (MSE), one of the most popular error measures in statistics, is defined as:
$\operatorname{MSE}=\frac{1}{n}\sum_{i=1}^n(\hat{Y_i} - Y_i)^2$
The question can be raised: why the above distance function? Why squared distances, and not absolute distances, for example? And why are we taking the square root?
Having quadratic distance, or error, functions has the advantage that we can both differentiate them and minimise them easily. As far as the square root is concerned, it aids interpretability because it converts the error back to the scale of our observed data.
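One numerical way to see the "easy to minimise" point: on a deliberately skewed data set, the mean minimises the total squared distance while the median minimises the total absolute distance. The data and search grid below are made up purely for illustration.

```python
import numpy as np

data = np.array([1, 2, 3, 4, 100])
grid = np.linspace(0, 100, 100_001)
squared  = np.array([np.sum((data - c) ** 2) for c in grid])
absolute = np.array([np.sum(np.abs(data - c)) for c in grid])
print("minimiser of squared distance: ", grid[np.argmin(squared)])   # 22.0, the mean
print("minimiser of absolute distance:", grid[np.argmin(absolute)])  # 3.0, the median
```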
means-to-meaning
$\begingroup$ Why do you say that a measure of spread is most 'helpful' when the data are normal? It seems to me that any set of data have a spread and the standard deviation is a summary of the spread even if it doesn't capture the shape of the spread. $\endgroup$ – Michael Lew - reinstate Monica Feb 4 '14 at 22:49
$\begingroup$ Sure, you are right. But I wasn't claiming that the standard deviation depends on the shape of the distribution in any way. Merely pointing out that IF you have some knowledge about the shape (or you are ready to make this assumption), it is usually a much more helpful information. In a similar way, the sample mean is a good descriptor of your data, IF you can make certain general assumptions about the distribution. $\endgroup$ – means-to-meaning Feb 14 '14 at 17:27
$\begingroup$ My favorite reason for using square instead of absolute value is that way it is a logarithm of probability of some Gaussian. So if you believe that errors are Gaussian in nature, and that bits are good way of measuring information, then it makes sense to use squared error. $\endgroup$ – qbolec Dec 18 '17 at 20:27
It may help to realize that the mean is analogous to the center of mass. The variance is the moment of inertia. The standard deviation is the radius of gyration.
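A tiny sketch of this analogy, placing a unit point mass at each of the values from the question (my own illustration):

```python
import math

data = [1, 3, 5, 7, 9]              # one unit point mass at each value
masses = [1.0] * len(data)

com = sum(m * x for m, x in zip(masses, data)) / sum(masses)   # centre of mass = the mean
moi = sum(m * (x - com) ** 2 for m, x in zip(masses, data))    # moment of inertia about the c.o.m.
rog = math.sqrt(moi / sum(masses))                             # radius of gyration

print(com, moi / sum(masses), rog)  # 5.0, 8.0, 2.828... = mean, variance, standard deviation
```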
For a historical perspective, take a look at:
George Airy (1875) On the algebraical and numerical theory of errors of observations and the combination of observations
Karl Pearson (1894) Contributions To the Mathematical theory of Evolution.
This plot from Airy 1875 shows the various measures of deviation, which are easily interconverted (page 17). The standard deviation is called "error of mean square". It is also discussed on pages 20-21, and he justifies its use on page 48, showing that it is easiest to calculate by hand because there is no need for separate calculation of negative and positive errors. The term standard deviation was introduced by Pearson in the paper cited above, on page 75.
As an aside: Note that the utility of the standard deviation depends on the applicability of the "law of errors", also known as the "normal curve", which arises from "a great many independent causes of error" (Airy 1875 pg 7). There is no reason to expect that the deviations of each individual from a group mean should follow this law. In many cases for biological systems a log-normal distribution is a better assumption than the normal. See:
Limpert et al (2001) Log-normal Distributions across the Sciences: Keys and Clues
It is further questionable whether it is appropriate to treat individual variation as noise, since the data-generating process acts at the level of the individual and not the group.
Livid
The standard deviation does, indeed, give more weight to those farther from the mean, because it is the square root of the average of the squared distances. The reasons for using this (rather than the mean absolute deviation that you propose, or the median absolute deviation, which is used in robust statistics) are partly due to the fact that calculus has an easier time with polynomials than with absolute values. However, often, we do want to emphasize the extreme values.
As to your question about the intuitive meaning - it develops over time. You are correct that more than one set of numbers can have the same mean and sd; this is because the mean and sd are just two pieces of information, and the data set may consist of 5 values (as in 1, 3, 5, 7, 9) or many more.
Whether a mean 5 and sd of 2.83 is "wide" or "narrow" depends on the field you are working in.
When you have only 5 numbers, it is easy to look at the full list; when you have many numbers, more intuitive ways of thinking about spread include such things as the five number summary or, even better, graphs such as a density plot.
Peter Flom - Reinstate Monica♦
The standard deviation measures the distance between your population, viewed as a random variable, and its mean.
Let us suppose that your 5 numbers are equally likely to have occurred, so that each has probability .20. This is represented by the random variable $X: [0,1] \rightarrow \mathbb{R}$ given by
$$X(t) = \begin{cases} 1 & 0 \leq t < \frac{1}{5} \\ 3 & \frac{1}{5} \leq t < \frac{2}{5}\\ 5 & \frac{2}{5} \leq t < \frac{3}{5}\\ 7 & \frac{3}{5} \leq t < \frac{4}{5}\\ 9 & \frac{4}{5} \leq t \leq 1 \end{cases}$$
The reason we move to functions and measure theory is that we need a systematic way of discussing how two probability spaces are the same up to events which have zero chance of occurring. Now that we have moved to functions, we need a sense of distance.
There are many senses of distance for functions, most notably the norms $$||Y||_p = \left(\int_{0}^1|Y(t)|^pdt\right)^{1/p}$$ for $Y: [0,1] \rightarrow \mathbb{R}$ and $1 \leq p < \infty$, which induce the distance functions $d_p(Y,Z) = ||Y - Z||_p$.
If we take the $p=1$ norm we get the naïve absolute value deviation that you mentioned: $$d_1(X,5) = ||X - \underline{5} ||_1 = 2.4. $$ If we take the $p=2$ norm we get the usual standard deviation $$d_2(X,5) = ||X-\underline{5}||_2 = 2.83.$$
Here $\underline{5}$ denotes the constant function $t \mapsto 5$.
Understanding the meaning of standard deviation is really understanding the meaning of the distance function $d_2$ and understanding why it is, in many senses, the best measure of distance between functions.
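Because $X$ is constant on five intervals of length $1/5$, the integrals reduce to weighted sums, and a few lines of Python (my own addition) confirm the two distances quoted above:

```python
values = [1, 3, 5, 7, 9]                            # X takes each value with weight 1/5
d1 = sum(abs(v - 5) / 5 for v in values)            # ||X - 5||_1 = 2.4
d2 = sum((v - 5) ** 2 / 5 for v in values) ** 0.5   # ||X - 5||_2 = 2.828...
print(d1, d2)
```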
SomeEE
$\begingroup$ This explanation includes some constructions that do not seem "intuitive." The principal one is the unwarranted appearance of a function defined on $[0,1]$, an interval which has nothing to do with the setting. (It is natural to define $X:\{1,3,5,7,9\}\to\mathbb{R}$ as $X(i)=i$ where the algebra is the power set of $\{1,3,5,7,9\}$.) Also, interpreting expressions like "$||X-5||_1$" is somewhat problematic because "$5$" represents a number--the mean of the population--not a random variable. In the end, after all this machinery is introduced, the question is restated but not actually answered. $\endgroup$ – whuber♦ Feb 4 '14 at 19:47
$\begingroup$ Yes the random variable you listed is standard for those comfortable with measure theory. I was hoping to narrow it down to understanding functions and integration for people with only calculus background. I will rewrite the mean as a function. $\endgroup$ – SomeEE Feb 4 '14 at 19:58
$\begingroup$ Also, in that it is a restated question, are you suggesting to include comments about why $d_2$ is the best measure of distance between functions? $\endgroup$ – SomeEE Feb 4 '14 at 20:00
$\begingroup$ The question asks for intuition in understanding the standard deviation. You have explained how it is the $L^2$ norm in some function space. Although that provides another mathematical formalization (and would be adequate intuition for a mathematician otherwise ignorant of the standard deviation), it seems to stop short of what the original poster was requesting. What would be most welcome is a follow-up paragraph explaining the "meaning of the distance function $d_2$" and elaborating, if only a little, on the senses in which it is a "best" measure of distance. $\endgroup$ – whuber♦ Feb 4 '14 at 20:11